In this post I provide specific instructions for setting up a data science strategy for CPG and consumer product companies. The post also includes the following tools:
- a downloadable template to use as a guide;
- a dataset of use cases for data in a CPG or consumer product company, to give you ideas for your own company.
What is data science in a CPG and consumer product company
When I say data, I define it as follows:
- data: quantitative in nature…numbers
- information: qualitative in nature…documents, media like speech, video and images
So when I refer to data, it includes the above.
Then there are the levels built on top of data:
- knowledge, which I define as synthesizing data to make it more useful (also called analysis);
- wisdom, which I define as the ultimate purpose of data: delivering insights from knowledge, honed through the experience of the practitioner.
So in its purest form, I define data science as obtaining wisdom (or insights) from data.
In the context of a CPG or consumer product company, extracting insights from data ranges from easy to hard.
Analysis of the past, like the top line revenue total from last month’s sales, is usually pretty easy. This tells us what happened. Easier analysis, which I define as general calculations like summing or averaging, falls into a category I call “Standard Analysis” in data science workflows.
But analyzing data to discover patterns and correlations (also called relationships) is harder, and using past data to ask “what if” and to help us predict can be a lot harder, especially if we want it in real time based on incoming data. Analyses like correlations, patterns and predictions fall into a category I call “Advanced Analysis” in data science workflows.
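To make the two categories concrete, here is a minimal sketch in Python using pandas and made-up weekly sales figures: the standard analysis is a sum and an average, the advanced analysis a correlation and a naive projection.

```python
import pandas as pd

# Hypothetical weekly sales data, for illustration only.
sales = pd.DataFrame({
    "week": pd.date_range("2024-01-07", periods=8, freq="W"),
    "revenue": [42000, 44500, 41200, 47800, 45300, 50100, 48900, 52400],
    "trade_spend": [6100, 6400, 5900, 7200, 6800, 7500, 7300, 7900],
})

# Standard Analysis: general calculations that tell us what happened.
print("Total revenue:", sales["revenue"].sum())
print("Average weekly revenue:", round(sales["revenue"].mean()))

# Advanced Analysis: relationships and predictions.
print("Revenue vs. trade spend correlation:",
      round(sales["revenue"].corr(sales["trade_spend"]), 2))

# A naive "what if": project next week from the average weekly growth rate.
growth = sales["revenue"].pct_change().mean()
print("Projected next week:", round(sales["revenue"].iloc[-1] * (1 + growth)))
```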
And analysis can be iterative: by continuously repeating it as new data comes in, our insights can improve.
So, data science is the practice of analyzing data to turn what happened in the past into insights that help us make future decisions, done continuously to improve our decision-making potential.
In this context, all CPG and consumer product companies practice data science, especially at the standard analysis level, and in some cases, at the advanced analysis level.
What is a data science strategy for CPG and consumer product companies
A data science strategy is simply a document (information, in this case) that defines how data (both data and information as defined above) – its collection, storage, and general use – supports the CPG or consumer product company in achieving its vision, mission, culture and goals. I define these elements more clearly, and explain how they relate to and differ from each other, in this post.
Why is a data science strategy important
CPG and consumer product companies succeed on the strength of their data. Having a documented strategy and process in place for data helps ensure that its value is maximized and nothing is overlooked.
How to develop a data science strategy for CPG and consumer product companies
Identify the data to collect and its governance
Let’s start with our key performance indicators (KPIs for short). They are numerical data points that should tell you in real time what is going on in your business.
KPIs reporting good values should indicate that you are on the right path to achieving your goals, which, if achieved, move you toward the culture, mission and vision you want for your company. There should be congruency from KPIs to goals to culture to mission to vision. Bad KPIs can mean everything downstream is at risk.
A quick detour with more about KPIs
For a CPG or consumer product company, the top KPIs I like are:
- top line revenue;
- returns/cancels/decline rates;
- repeat purchasing rates;
- EBIT or net income;
- sales of units per retail door;
- retailer trade spend as a percentage of top line revenue (all in, including penalties);
- revenue-to-funding ratio and EBIT-to-funding ratio.
And with respect to these KPIs, I like to see weekly numbers and how they are changing relative to the previous week, month, quarter and year, along with the change from last year and monthly moving averages.
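As an illustration, here is a minimal pandas sketch (revenue figures are made up) of that kind of weekly view, with week-over-week change and a four-week moving average:

```python
import pandas as pd

# Hypothetical weekly top line revenue; substitute your own KPI series.
kpi = pd.Series(
    [42000, 44500, 41200, 47800, 45300, 50100, 48900, 52400],
    index=pd.date_range("2024-01-07", periods=8, freq="W"),
    name="revenue",
)

report = pd.DataFrame({
    "revenue": kpi,
    "vs_prior_week": kpi.pct_change(1).round(3),       # week-over-week change
    "vs_4_weeks_ago": kpi.pct_change(4).round(3),      # rough month-over-month view
    "moving_avg_4wk": kpi.rolling(4).mean().round(0),  # monthly moving average
})
# With 52+ weeks of history, kpi.pct_change(52) would add the change from last year.
print(report)
```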
There are many other KPIs, and I track them as well, but for me they all roll up into the key ones above, so by tracking those, I can maintain a good snapshot of the current health of my business, with historical context that indicates direction.
This article is not about KPIs, but these examples are important so you have clarity on the data to collect. For more on the KPIs I use in CPG and consumer product companies, click here.
The KPIs important to you will point you towards the data to collect and analyze
Your data leads to analysis, which leads to your KPIs, which lead to your goals, which lead to culture, mission and vision.
So, your first step in a data science strategy is to define, at a minimum, the data you should collect to support these elements.
There may be other datasets you are required to collect for regulatory purposes, so define them as well.
Additional datasets to collect
While defining the minimum data you need to collect is a start, I strongly recommend going much further than that. Define what you would like to have, in an ideal world. You may not be able to get it, but by at least identifying it, you can work towards getting it in the future.
A problem with data is that we tend to work with the data we can most easily obtain and/or easily work with. But we need to think about finding data that is hard to get and/or hard to work with because there may be great nuggets of insight and wisdom to be found from that data. Furthermore, because it might be hard, most other companies won’t do it, which may give us a competitive advantage.
I recommend collecting as much data as possible, because the collection and storage of data keep getting cheaper and easier. That does not mean the analysis, or the data preparation required for analysis, is cheap and easy, or even doable today, but as technology advances, it will likely become so.
For example, I started collecting detailed customer usage data for one of my businesses back in 2006. I knew then that I could not do anything with it, but when machine learning came of age, as it now has, I could make use of it, and I am.
Another example is to record all meetings and calls and use machine learning tools to transcribe them to text. Taking summary notes during calls, which is what I do, often misses important context of what people say, so having text transcriptions for reference is a significant benefit. Further, natural language processing can work through volumes of recorded calls and meetings to help unearth key nuggets of information or identify patterns and correlations.
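As a toy illustration of that last point (a sketch, not a production pipeline), here is one way to surface the most distinctive terms in a set of hypothetical transcripts using scikit-learn:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical transcripts; in practice these would come from running
# speech-to-text over your recorded calls and meetings.
transcripts = [
    "Buyer asked about shelf placement and said returns were trending up.",
    "Retailer wants a promo calendar and mentioned returns and slotting fees.",
    "Distributor call focused on lead times and a competitor's new pricing.",
]

# Score each call's terms by TF-IDF, a crude first pass at spotting
# recurring themes across a large volume of transcripts.
vec = TfidfVectorizer(stop_words="english")
scores = vec.fit_transform(transcripts)
terms = vec.get_feature_names_out()
for i, row in enumerate(scores.toarray()):
    top_terms = [t for _, t in sorted(zip(row, terms), reverse=True)[:3]]
    print(f"Call {i + 1}:", top_terms)
```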
Collect as much data as you can because it could become of significant value in the future through technology.
Protect the business by getting data, information, knowledge and wisdom out of people’s heads
You absolutely do not want people to keep information only in their heads, where they can walk out with it and disrupt the business by their absence. Put policies in place requiring people to record everything digitally: standard operating procedures, workflows, and knowledge bases.
Set access rights
Identify who can access what information.
Identify how available the data needs to be
My general rule is to have real-time, up-to-date access to all data and analysis that rolls up to my KPIs, and to be able to get it from my mobile device. Beyond that, set your own policies for how close to real time data and its analysis must be. Usually, the closer to real time, the more cost and effort involved.
Identify who collects the data
Identify, down to each employee, who is responsible for collecting and digitally storing which data. There are many common datasets that every employee contributes to – work emails, files, recorded calls and meetings, workflows, standard operating procedures, knowledge bases – all of which are automatically digitized and stored. But for specific datasets, identify who is responsible for them and put processes in place to verify that the data is being collected on schedule and in the proper format.
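Here is a minimal sketch of such a verification process; the file names, owners and expected columns are hypothetical placeholders:

```python
import csv
from pathlib import Path

# Hypothetical expectations: each owner must deposit a CSV with these
# columns by the end of each week.
EXPECTED = {
    "sales_by_door.csv": {"owner": "j.smith", "columns": {"door_id", "units", "week"}},
    "returns_log.csv": {"owner": "a.lee", "columns": {"order_id", "reason", "date"}},
}

def verify(drop_dir: str) -> None:
    for name, spec in EXPECTED.items():
        path = Path(drop_dir) / name
        if not path.exists():
            print(f"MISSING: {name} (owner: {spec['owner']})")
            continue
        with open(path, newline="") as f:
            header = set(next(csv.reader(f), []))
        if spec["columns"] <= header:
            print(f"OK: {name}")
        else:
            print(f"BAD FORMAT: {name} is missing {spec['columns'] - header}")

verify("/data/weekly_drop")  # run on a schedule, e.g. via cron
```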
Identify the data storage capabilities
It used to be that you would try to identify what you thought you would need based on your vision of the size and scope of your company, put in place what you could afford, and add capacity and capabilities later when you needed them.
Now, just go big from the start. Cloud storage and additional services (like machine learning) are cheap even at small scale, and since pricing is based on use, you can start small and easily add storage and services as needed.
It is better to set up scalable infrastructure from the start, which is quite inexpensive to do even for a consumer product startup, and to get that out of the way rather than later face moving to another system, which can be very disruptive to the business.
Put redundancies in place so that data is backed up and backups are easy to access and restore.
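As one small sketch of that idea, a backup script can verify each copy by checksum so you know the backup is actually restorable; the paths here are illustrative:

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def backup_and_verify(src: str, dst: str) -> None:
    """Copy every file from src to dst, confirming each copy byte-for-byte."""
    for f in Path(src).rglob("*"):
        if not f.is_file():
            continue
        target = Path(dst) / f.relative_to(src)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(f, target)
        if sha256(f) != sha256(target):
            raise RuntimeError(f"Backup verification failed for {f}")

backup_and_verify("/data/primary", "/backups/weekly")
```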
Data directory tool
Use this worksheet to document your datasets. Be sure to read the notes inserted into each header field label. This worksheet is a simple approach to maintaining a directory of your data so you can see what you have and the governance around each dataset.
This is a good place to start for any company, startups especially. But it is better to build this tool into a relational database so that it becomes a metadata store for your data, capturing more information such as semantics, tags, provenance, lineage, evolution of the data, the processes that extract, transform and load the data, versioning, when verifications were performed to validate that the data was collected, and more.
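A minimal sketch of what that relational metadata store could look like, using SQLite from Python; the table and column names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect("data_directory.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS datasets (
    id          INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    description TEXT,   -- semantics: what the data means
    owner       TEXT,   -- who is responsible for collecting it
    source      TEXT,   -- provenance / lineage
    etl_process TEXT,   -- how it is extracted, transformed and loaded
    access_role TEXT,   -- who may access it
    update_freq TEXT    -- how close to real time it must be
);
CREATE TABLE IF NOT EXISTS verifications (
    dataset_id  INTEGER REFERENCES datasets(id),
    verified_on TEXT,   -- when collection was validated
    verified_by TEXT
);
""")
conn.commit()
```

From there, tags, versioning and approval workflows can be layered on as additional tables.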
Set regular data science strategy reviews
Set triggers to update the data directory worksheet. For myself and my companies, I use a daily log containing prompts and reminders to record any changes to the data directory so that it stays updated. But those changes have to be approved by department heads first. This is another reason to build the metadata database: people can access it at the same time, and rules can be set up so that department heads must approve changes to the metadata before they go live.
Then, set up a time – monthly is a good cadence – for management to review the data directory worksheet or metadata database and determine whether existing datasets support the company strategy or new datasets need to be collected.
The tactics of the data science strategy
The tactics of a data science strategy would include more detail about people, skills, roles and responsibilities, software applications, workflows or projects, and so on.
Data Science Use Cases
The following dataset includes data science use cases for consumer brands and serves as a repository for AI resources I come across that companies can use.
If you cannot scroll down to the bottom of the page (where you see the footer), click here to open the dataset in a clean HTML page.
Artificial Intelligence Dataset
Date | Summary | Notes | Sectors | Industries | Product Categories | Recreation Categories | Companies | Image | File | Details | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
05/20/2025 |
Building News Agents for Daily News Recaps with MCP, Q, and tmux https://eugeneyan.com/writing/news-agents/?utm_source=tldrai |
view | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
05/20/2025 |
Emerging Developer Patterns for the AI Era Yoko Li share Table of Contents 1. AI-native Git 2. Dashboards -> Synthesis 3. Docs are evolving 4. Templates to generation 5. Beyond .env / secrets management 6. Accessibility as the universal interface 7. The rise of asynchronous agent work 8. MCP as universal standard 9. Abstracted primitives Posted May 7, 2025 Developers are moving past AI as just tooling and starting to treat it as a new foundation for how software gets built. Many of the core concepts we’ve taken for granted — version control, templates, documentation, even the idea of a user — are being rethought in light of agent-driven workflows. As agents become both collaborators and consumers, we expect to see a shift in foundational developer tools. Prompts can be treated like source code, dashboards can become conversational, and docs are written as much for machines as for humans. Model Context Protocol (MCP) and AI-native IDEs point to a deeper redesign of the development loop itself: we’re not just coding differently, we’re designing tools for a world where agents participate fully in the software loop. Below, we explore nine forward-looking developer patterns that, although early, are grounded in real pain points and give a hint of what could emerge. These range from rethinking version control for AI-generated code, to LLM-driven user interfaces and documentation. Let’s dive into each pattern, with examples and insights from the dev community. 1. AI-native Git: Rethinking version control for AI agents Now that AI agents increasingly write or modify large portions of application code, what developers care about starts to change. We’re no longer fixated on exactly what code was written line-by-line, but rather on whether the output behaves as expected. Did the change pass the tests? Does the app still work as intended? This flips a long-standing mental model: Git was designed to track the precise history of hand-written code, but, with coding agents, that granularity becomes less meaningful. Developers often don’t audit every diff — especially if the change is large or auto-generated — they just want to know whether the new behavior aligns with the intended outcome. As a result, the Git SHA — once the canonical reference for “the state of the codebase” — begins to lose some of its semantic value. A SHA tells you that something changed, but not why or whether it’s valid. In AI-first workflows, a more useful unit of truth might be a combination of the prompt that generated the code and the tests that verify its behavior. In this world, the “state” of your app might be better represented by the inputs to generation (prompt, spec, constraints) and a suite of passing assertions, rather than a frozen commit hash. In fact, we might eventually track prompt+test bundles as versionable units in their own right, with Git relegated to tracking those bundles, not just raw source code. Taking this a step further: In agent-driven workflows, the source of truth may shift upstream toward prompts, data schemas, API contracts, and architectural intent. Code becomes the byproduct of those inputs, more like a compiled artifact than a manually authored source. Git, in this world, starts to function less as a workspace and more as an artifact log — a place to track not just what changed, but why and by whom. 
We may begin to layer in richer metadata, such as which agent or model made a change, which sections are protected, and where human oversight is required – or where AI reviewers like Diamond can step in as part of the loop. To make this more concrete, below is a mock-up of what an AI-native Git flow could look like in practice: 2. Dashboards -> Synthesis: Dynamic AI-driven interfaces For years, dashboards have served as the primary interface for interacting with complex systems such as observability stacks, analytics, cloud consoles (think AWS), and more. But their design often suffers from an overloaded UX: too many knobs, charts, and tabs that force users to both hunt for information and figure out how to act on it. Especially for non-power users or across teams, these dashboards can become intimidating or inefficient. Users know what they want to achieve, but not where to look or which filters to apply to get there. The latest generation of AI models offers a potential shift. Instead of treating dashboards as rigid canvases, we can layer in search and interaction. LLMs can now help users find the right control (“Where can I adjust the rate limiter settings for this API?”); synthesize screen-wide data into digestible insights (“Summarize the error trends across all services in staging over the past 24 hours”); and surface unknown/unknowns (“Given what you know about my business, generate a list of metrics I should pay attention to this quarter”). We are already seeing technical solutions like Assistant UI that make it possible for agents to leverage React components as tools. Just as content has become dynamic and personalized, UI itself can become adaptive and conversational. A purely static dashboard may soon feel outdated next to a natural language-driven interface that reconfigures based on user intent. For example, instead of clicking through five filters to isolate metrics, a user might say, “Show me anomalies from last weekend in Europe,” and the dashboard reshapes to show that view, complete with summarized trends and relevant logs. Or, even more powerfully, “Why did our NPS score drop last week?”, and the AI might pull up survey sentiment, correlate it with a product deployment, and generate a short diagnostic narrative. At a larger scale, if agents are now consumers of software, we may also need to rethink what “dashboards” are or for whom they’re designed. For example, dashboards could render views optimized for agent experience — structured, programmatically accessible surfaces designed to help agents perceive system state, make decisions, and act. This might lead to dual-mode interfaces: one human-facing and one agent-facing, both sharing a common state but tailored to different modes of consumption. In some ways, agents are stepping into roles once filled by alerts, cron jobs, or condition-based automation, but with far more context and flexibility. Instead of pre-wired logic like if error rate > threshold, send alert, an agent might say, “Error rates are rising. Here’s the likely cause, the impacted services, and a proposed fix.” In this world, dashboards aren’t just places to observe; they’re places where both humans and agents collaborate, synthesize, and take action. How dashboards might evolve to support both human and AI-agent viewers. 3. Docs are becoming a combination of tools, indices, and interactive knowledge bases Developer behavior is shifting when it comes to documentation. 
Instead of reading through a table of contents or scanning top-down, users now start with a question. The mental model is no longer “Let me study this spec”, but “Rework this information for me, in a way I like to consume.” This subtle shift — from passive reading to active querying — is changing what docs need to be. Rather than just static HTML or markdown pages, they’re becoming interactive knowledge systems, backed by indices, embeddings, and tool-aware agents. As a result, we’re seeing the rise of products like Mintlify, which not only structure documentation as semantically searchable databases, but also serve as context sources for coding agents across platforms. Mintlify pages are now frequently cited by AI coding agents — whether in AI IDEs, VS Code extensions, or terminal agents — because coding agents use up-to-date documentation as grounding context for generation. This changes the purpose of docs: they’re no longer just for human readers, but also for agent consumers. In this new dynamic, the documentation interface becomes something like instructions for AI agents. It doesn’t just expose raw content, but explains how to use a system correctly. A screenshot from the Mintlify where users can bring up the AI chat window to do Q&A over Mintlify documentations using the cmd+k shortcut 4. Templates to generation: Vibe coding replaces create-react-app In the past, getting started on a project meant choosing a static template such as a boilerplate GitHub repo or a CLI like create-react-app, next init, or rails new. These templates served as the scaffolding for new apps, offering consistency but little customization. Developers conformed to whatever defaults the framework provided or risked significant manual refactoring. Now, that dynamic is shifting with the emergence of text-to-app platforms like Replit, Same.dev, Loveable, Chef by Convex, and Bolt, as well as AI IDEs like Cursor. Developers can describe what they want (e.g., “a TypeScript API server with Supabase, Clerk and Stripe”) and have a custom project scaffolded in seconds. The result is a starter that’s not generic, but personalized and purposeful, reflecting both the developer’s intent and their chosen stack. This unlocks a new distribution model in the ecosystem. Instead of a few frameworks sitting at the head of the long tail, we may see a wider spread of composable, stack-specific generations where tools and architectures are mixed and matched dynamically. It’s less about picking a framework and more about describing an outcome around which the AI can build a stack. One engineer might create an app with Next.js and tRPC, while another starts with Vite and React, but both get working scaffolds instantly. Of course, there are tradeoffs. Standard stacks bring real advantages, including making teams more productive, improving onboarding, and making troubleshooting easier across orgs. Refactoring across frameworks isn’t just a technical lift; it’s often entangled with product decisions, infrastructure constraints, and team expertise. But what’s shifting is the cost of switching frameworks or starting without one. With AI agents that understand project intent and can execute large refactors semi-autonomously, it becomes much more feasible to experiment — and to reverse course, if needed. This means framework decisions are becoming much more reversible. A developer might start with Next.js, but later decide to migrate to Remix and Vite, and ask the agent to handle the bulk of the refactor. 
This reduces the lock-in that frameworks used to impose and encourages more experimentation, especially at early stages of a project. It also lowers the bar for trying opinionated stacks, because switching later is no longer a massive investment. 5. Beyond .env: Managing secrets in an agent-driven world For decades, .env files have been the default way for developers to manage secrets (e.g., API keys, database URLs, and service tokens) locally. They’re simple, portable, and developer-friendly. But in an agent-driven world, this paradigm begins to break down. It’s no longer clear who owns the .env when an AI IDE or agent is writing code, deploying services, and orchestrating environments on our behalf. We’re seeing hints of what this could look like. The latest MCP spec, for example, includes an authorization framework based on OAuth 2.1, signaling a possibility to move toward giving AI agents scoped, revocable tokens instead of raw secrets. We can imagine a scenario where an AI agent doesn’t get your actual AWS keys, but instead obtains a short-lived credential or a capability token that lets it perform a narrowly defined action. Another way this could shake out is through the rise of local secret brokers — services running on your machine or alongside your app that act as intermediaries between agents and sensitive credentials. Rather than injecting secrets into .env files or hardcoding them into scaffolds, the agent could request access to a capability (“deploy to staging” or “send logs to Sentry”), and the broker determines whether to grant it — just-in-time, and with full auditability. This decouples secret access from the static filesystem, and makes secret management feel more like API authorization than environment configuration. A CLI mock-up of what the agent-centric secret broker flow could look like. 6. Accessibility as the universal interface: Apps through the eyes of an LLM We’re starting to see a new class of apps (e.g., Granola and Highlight) that request access to accessibility settings on macOS not for traditional accessibility use cases, but to enable AI agents to observe and interact with interfaces. However, this isn’t a hack: It’s a glimpse into a deeper shift. Accessibility APIs were built to help users with vision or motor impairments navigate digital systems. But those same APIs, when extended thoughtfully, may become the universal interface layer for agents. Instead of clicking pixel positions or scraping DOMs, agents could observe applications the way assistive tech does — semantically. The accessibility tree already exposes structured elements like buttons, headings, and inputs. If extended with metadata (e.g., intent, role, and affordance), this could become a first-class interface for agents, letting them perceive and act on apps with purpose and precision. There are a couple potential directions: Context extraction: A standard way for an LLM agent using accessibility or semantic APIs to query what’s on screen, what it can interact with, and what the user is doing. Intentful execution: Rather than expecting an agent to chain multiple API calls manually, expose a high-level endpoint where it can declare goals (“add an item to the cart, choose fastest shipping”), and let the backend figure out the steps. Fallback UI for LLMs: Accessibility features provide a fallback UI for LLMs. Any app that exposes a screen becomes agent-usable, even if it doesn’t have a public API. 
For developers, it suggests a new “render surface” — not just visual or DOM layers, but agent-accessible context, possibly defined via structured annotations or accessibility-first components. 7. The rise of asynchronous agent work As developers begin to work alongside coding agents more fluidly, we’re seeing a natural shift toward asynchronous workflows where agents operate in the background, pursue parallel threads of work, and report back when they’ve made progress. This mode of interaction is starting to look less like pair programming and more like task orchestration: you delegate a goal, let the agent run, and check in later. Crucially, this isn’t just about offloading effort; it also compresses coordination. Instead of pinging another team to update a config file, triage an error, or refactor a component, developers can increasingly assign that task directly to an agent that acts on their intent and executes in the background. What once required sync meetings, cross-functional handoffs, or long review cycles could become an ambient loop of request, generate, and validate. The surfaces for agent interaction are expanding, too. Instead of always prompting via IDE or CLI, devs could begin to interact with agents by, for example: Sending messages to Slack. Commenting on Figma mocks. Creating inline annotations on code diffs or PRs (e.g. Graphite’s review assistant). Adding feedback based on deployed app previews. Utilizing voice or call-based interfaces, where devs can describe changes verbally. This creates a model where agents are present across the full lifecycle of development. They’re not just writing code, but interpreting designs, responding to feedback, and triaging bugs across platforms. The developer becomes the orchestrator who decides which thread to pursue, discard, or merge. Perhaps this model of branching and delegating to agents becomes the new Git branch — not a static fork of code, but a dynamic thread of intent, running asynchronously until it’s ready to land. 8. MCP is one step closer to becoming a universal standard We recently published a deep dive on MCP. Since then, momentum has accelerated: OpenAI publicly adopted MCP, several new features of the spec were merged, and toolmakers are starting to converge around it as the default interface between agents and the real world. At its core, MCP solves two big problems: It gives an LLM the right set of context to complete tasks it may have never seen. It replaces N×M bespoke integrations with a clean, modular model in which tools expose standard interfaces (servers) usable by any agents (clients). We expect to see broader adoption as remote MCP and a de-facto registry come online. And, over time, apps may begin shipping with MCP surfaces by default. Think of how APIs enabled SaaS products to plug into each other and compose workflows across tools. MCP could do the same for AI agents by turning standalone tools into interoperable building blocks. A platform that ships with an MCP client baked in isn’t just “AI-ready,” but is part of a larger ecosystem, instantly able to tap into a growing network of agent-accessible capabilities. Additionally, MCP clients and servers are logical barriers, not physical boundaries. This means any client can also act as a server, and vice versa. This could theoretically unlock a powerful level of composability via which an agent using an MCP client to consume context can also expose its own capabilities via a server interface. 
For example, a coding agent could act as a client to fetch GitHub issues, but also register itself as a server that exposes test coverage or code analysis results to other agents. 9. Abstracted primitives: Every AI agent needs auth, billing, and persistent storage As vibe coding agents get more powerful, one thing becomes clear: Agents can generate a lot of code, but they still need something solid to plug into. Just like human developers lean on Stripe for payments, Clerk for auth, or Supabase for database capabilities, agents need similarly clean and composable service primitives to scaffold reliable applications. In many ways, these services — APIs with clear boundaries, ergonomic SDKs, and sane defaults that reduce the chance of failure — are increasingly serving as the runtime interface for agents. If you’re building a tool that generates a SaaS app, you don’t want your agent to roll its own auth system or write billing logic from scratch; you want it to use providers like Clerk and Stripe. As this pattern matures, we may start to see services optimize themselves for agent consumption by exposing not just APIs, but also schemas, capability metadata, and example flows that help agents integrate them more reliably. Some services might even start shipping with MCP servers by default, turning every core primitive into something agents can reason about and use safely out of the box. Imagine Clerk exposing an MCP server that lets an agent query available products, create new billing plans, or update a customer’s subscription — all with permission scopes and constraints defined up front. Instead of hand-authoring API calls or hunting through docs, an agent could say, “Create a monthly ‘Pro’ plan at $49 with usage-based overages,” and Clerk’s MCP server would expose that capability, validate the parameters, and handle the orchestration securely. Just as the early web era needed Rails generators and rails new to move fast, the agent era needs trustworthy primitives — drop-in identity, usage tracking, billing logic, and access control — all abstracted enough to generate against, but expressive enough to grow with the app. Conclusion These patterns point to a broader shift in which new developer behaviors are emerging alongside more capable foundation models. And, in response, we’re seeing new toolchains and protocols like MCP take shape. It’s not just AI layered onto old workflows, but is a redefinition of how software gets built with agents, context, and intent at the core. Many developer-tooling layers are fundamentally shifting, and we are excited to build and invest in the next generation of tools. |
view | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
11/25/2024 |
Here’s some Monday motivation for anyone remotely involved in product R&D and sustainable materials: 🌟 Imagine an explosion of new lab-created materials for durable goods and bio-grown ingredients for consumables and body care. Thanks to tools like Google GNoME and Meta OMat24 (both open-source), powered by vast datasets and AI, hundreds of thousands of synthetic materials and ingredients have already moved to experimental synthesis. Thousands could be ready for use in real-world products. 🔬 Democratizing R&D. Picture an all-in-one lab box for material discovery, accessible to companies of any size. Alan Aspuru-Guzik's Matter Lab is leading the way with a modular system that combines advanced software, synthesis, and testing robots. Enter a material description, and the system instantly simulates and tests molecules. This puts the power of cutting-edge material science into anyone's hands. 🌍 A template for transforming industries. Companies like Halcyon are taking AI-powered innovation even further. They’re building platforms to harness vast seas of data, helping energy transition practitioners accelerate the move toward decarbonization. Could their platform could be a blueprint that can transform industries far beyond energy? 🤖 Leveraging AI for Breakthroughs: I’ve started tracking thousands of innovations weekly—an overwhelming task without AI. By pairing AI with my process, I am surfacing breakthroughs that matter most for my work. We’re entering an age where material science, R&D, and product development are becoming accessible to everyone—regardless of company size or expertise. This shift democratizes innovation and levels the playing field. Here's more: NVIDIA's ALCHEMI platform integrates AI with high-performance computing to expedite materials discovery. It offers tools for generating synthetic datasets to train AI models and introduces accelerated materials discovery modules, such as machine learning interatomic potentials (MLIPs), to enhance the efficiency of materials research. AlphaMat: Developed as a material informatics hub, AlphaMat connects data, features, models, and applications. It is designed to handle datasets of varying scales and can design structural and component descriptors effective for various AI models. AlphaMat has been instrumental in discovering thousands of new materials across photonics, batteries, catalysts, and capacitors. NOMAD AI Toolkit: The Novel-Materials-Discovery (NOMAD) Artificial-Intelligence Toolkit is a web-based infrastructure for interactive AI-based analysis of materials science data. It operates on FAIR (Findable, Accessible, Interoperable, and Reusable) data stored in the NOMAD Archive, facilitating data mining, machine learning, and other AI techniques to analyze materials data. Microsoft Azure Quantum Elements: Microsoft's Azure Quantum Elements platform combines AI with traditional high-performance computing and quantum tools for materials science, chemistry, and pharmaceutical research. It utilizes physics-based AI models and advanced algorithms to process complex research data, aiming to accelerate the discovery of new materials. IBM's Materials Discovery: IBM Research is leveraging AI, robotics, hybrid cloud, and quantum computing to accelerate the discovery of new materials. Their goal is to unlock new properties and materials to address global challenges in a significantly reduced timeframe. |
view | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
10/30/2024 |
|
view | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
10/30/2024 |
NotebookLM - https://notebooklm.google/ - upload sources, get instant insights, ask questions. |
view | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
10/21/2024 | How AI Startups Can Lead in the Over Crowded AI Market |
How AI Startups Can Lead in the Over Crowded AI Market Jeremiah Owyang Jeremiah Owyang Venture Capital Investor at Blitzscaling Ventures | Llama Lounge event host & Conference Speaker | Works in an Airstream | CrossFit October 20, 2024 Technical moats last 3-6 months; AI startups need these 3 strategies Photo by Erick Davidson I meet AI founders who brag about their technology, but in reality: AI Startups have a 3-6 month technical moat. Their competitors can rapidly copy them and one single tweet from Sam Altman about the next release can eradicate their market lead. Good news, I have some defensible strategies I will share below. Bloody red ocean crowded market: There are nearly 20,000 AI projects according to the latest internet database, Theres an AI for That. Each week, I review dozens—sometimes over 100—of AI startups for my Llama Lounge event series (also see photos) and formal pitches as a VC. Many are undifferentiated, stuck in crowded categories, or destined to be overtaken by tech giants or foundational models by the next tweet thread from Sam Altman or a press release from Google. What I Look For: Baseline and initial screen: team and market analysis. First, the baseline differentiators: team credibility, leadership, background diversity, and the ability to adapt. Does the team have a unique market advantage? Are they creative enough to find new markets instead of competing in a bloody red ocean? Three Business Strategies for AI Startups: Access to Exclusive Data: Over 80% of the world's data is estimated to be behind firewalls—corporate, government, personal clouds. Does the startup have unique and direct access to this untapped data? Do users legally opt in to provide data, and is that data then refined and improved? If B2B, do they have exclusive contracts that grant market superiority? What we're looking for is a unique defensive data moat that no one else has. Product-Led Growth/Viral Effects: Does the product spread naturally without marketing or sales? This is product-led growth. Advanced forms include viral effects and compounding viral loops. On the consumer side, social networks thrive on this as friends invite friends. In B2B, products like DocuSign, Box, and Zoom spread through user invites. The result: the product naturally spreads with little or no marketing, advertising, or sales. Network Effects: When value increases for each new user or client that joins. For example, marketplaces benefit as more buyers and sellers join, creating more options. Another example: app platforms gain as more data, apps, and developers participate. The result: loyalty and high-switching costs with little effort that results in rapid market expansion. Combining all three strategies will likely result in a defensible unique flywheel, and the AI startup can quickly expand (at low cost) to become the defensible leader in their market —it’s not just about technology moats. If attributes 2 and 3 sound familiar, they should—they're the foundation of our investment thesis at Blitzscaling Ventures, as outlined in the best-selling book called Blitzscaling by Chris Yeh (my friend for 20 years and now my business partner) and Reid Hoffman (our major LP and special advisor). We filter and score startups based on these factors and can quickly spot which companies have what it takes to truly stand out. Read more about the blitzscaling scorecard on this recent blog post by Chris Yeh. I may dig into this in a future post: I assume that AI entities are creating AI projects. 
which will even further flood the market. PS: this is why I call them “AI Projects” as they may not be registered companies led by humans. |
view | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
10/14/2024 |
Generative AI's o1 Act |
wo years into the Generative AI revolution, research is progressing the field from “thinking fast”—rapid-fire pre-trained responses—to “thinking slow”— reasoning at inference time. This evolution is unlocking a new cohort of agentic applications. On the second anniversary of our essay “Generative AI: A Creative New World,” the AI ecosystem looks very different, and we have some predictions for what’s on the horizon. The foundation layer of the Generative AI market is stabilizing in an equilibrium with a key set of scaled players and alliances, including Microsoft/OpenAI, AWS/Anthropic, Meta and Google/DeepMind. Only scaled players with economic engines and access to vast sums of capital remain in play. While the fight is far from over (and keeps escalating in a game-theoretic fashion), the market structure itself is solidifying, and it’s clear that we will have increasingly cheap and plentiful next-token predictions. As the LLM market structure stabilizes, the next frontier is now emerging. The focus is shifting to the development and scaling of the reasoning layer, where “System 2” thinking takes precedence. Inspired by models like AlphaGo, this layer aims to endow AI systems with deliberate reasoning, problem-solving and cognitive operations at inference time that go beyond rapid pattern matching. And new cognitive architectures and user interfaces are shaping how these reasoning capabilities are delivered to and interact with users. What does all of this mean for founders in the AI market? What does this mean for incumbent software companies? And where do we, as investors, see the most promising layer for returns in the Generative AI stack? In our latest essay on the state of the Generative AI market, we’ll explore how the consolidation of the foundational LLM layer has set the stage for the race to scale these higher-order reasoning and agentic capabilities, and discuss a new generation of “killer apps” with novel cognitive architectures and user interfaces. Strawberry Fields ForeverThe most important model update of 2024 goes to OpenAI with o1, formerly known as Q* and also known as Strawberry. This is not just a reassertion of OpenAI’s rightful place atop the model quality leaderboards, but also a notable improvement on the status quo architecture. More specifically, this is the first example of a model with true general reasoning capabilities, which they’ve achieved with inference-time compute. What does that mean? Pre-trained models are doing next token prediction on an enormous amount of data. They rely on “training-time compute.” An emergent property of scale is basic reasoning, but this reasoning is very limited. What if you could teach a model to reason more directly? This is essentially what’s happening with Strawberry. When we say “inference-time compute” what we mean is asking the model to stop and think before giving you a response, which requires more compute at inference time (hence “inference-time compute”). The “stop and think” part is reasoning. AlphaGo x LLMsSo what is the model doing when it stops and thinks? Let’s first take a quick detour to March 2016 in Seoul. One of the most seminal moments in deep learning history took place here: AlphaGo’s match against legendary Go master Lee Sedol. This wasn’t just any AI-vs-human match—it was the moment the world saw AI do more than just mimic patterns. It was thinking. What made AlphaGo different from previous gameplay AI systems, like Deep Blue? 
Like LLMs, AlphaGo was first pre-trained to mimic human experts from a database of roughly 30 million moves from previous games and more from self-play. But rather than provide a knee jerk response that comes out of the pre-trained model, AlphaGo takes the time to stop and think. At inference time, the model runs a search or simulation across a wide range of potential future scenarios, scores those scenarios, and then responds with the scenario (or answer) that has the highest expected value. The more time AlphaGo is given, the better it performs. With zero inference-time compute, the model can’t beat the best human players. But as the inference time scales, AlphaGo gets better and better—until it surpasses the very best humans. Let’s bring it back to the LLM world. What’s hard about replicating AlphaGo here is constructing the value function, or the function by which the responses are scored. If you’re playing Go, it’s more straightforward: you can simulate the game all the way to the end, see who wins, and then calculate an expected value of the next move. If you’re coding, it’s somewhat straightforward: you can test the code and see if it works. But how do you score the first draft of an essay? Or a travel itinerary? Or a summary of key terms in a long document? This is what makes reasoning hard with current methods, and it’s why Strawberry is comparatively strong on domains proximate to logic (e.g. coding, math, the sciences) and not as strong in domains that are more open-ended and unstructured (e.g. writing). While the actual implementation of Strawberry is a closely guarded secret, the key ideas involve reinforcement learning around the chains of thought generated by the model. Auditing the model’s chains of thought suggests that something fundamental and exciting is happening that actually resembles how humans think and reason. For example, o1 is showing the ability to backtrack when it gets stuck as an emergent property of scaling inference time. It is also showing the ability to think about problems the way a human would (e.g. visualize the points on a sphere to solve a geometry problem) and to think about problems in new ways (e.g. solving problems in programming competitions in a way that humans would not). And there is no shortage of new ideas to push forward inference-time compute (e.g. new ways of calculating the reward function, new ways of closing the generator/verifier gap) that research teams are working on as they try to improve the model’s reasoning capabilities. In other words, deep reinforcement learning is cool again, and it’s enabling an entire new reasoning layer. System 1 vs System 2 ThinkingThis leap from pre-trained instinctual responses (”System 1”) to deeper, deliberate reasoning (“System 2”) is the next frontier for AI. It’s not enough for models to simply know things—they need to pause, evaluate and reason through decisions in real time. Think of pre-training as the System 1 layer. Whether a model is pre-trained on millions of moves in Go (AlphaGo) or petabytes of internet-scale text (LLMs), its job is to mimic patterns—whether that’s human gameplay or language. But mimicry, as powerful as it is, isn’t true reasoning. It can’t properly think its way through complex novel situations, especially those out of sample. This is where System 2 thinking comes in, and it’s the focus of the latest wave of AI research. When a model “stops to think,” it isn’t just generating learned patterns or spitting out predictions based on past data. 
It’s generating a range of possibilities, considering potential outcomes and making a decision based on reasoning. For many tasks, System 1 is more than enough. As Noam Brown pointed out on our latest episode of Training Data, thinking for longer about what the capital of Bhutan is doesn’t help—you either know it or you don’t. Quick, pattern-based recall works perfectly here. But when we look at more complex problems—like breakthroughs in mathematics or biology—quick, instinctive responses don’t cut it. These advances required deep thinking, creative problem-solving and—most importantly—time. The same is true for AI. To tackle the most challenging, meaningful problems, AI will need to evolve beyond quick in-sample responses and take its time to come up with the kind of thoughtful reasoning that defines human progress. ![]() ![]() A New Scaling Law: The Inference Race is OnThe most important insight from the o1 paper is that there’s a new scaling law in town. Pre-training LLMs follows a well understood scaling law: the more compute and data you spend on pre-training the model, the better it performs. The o1 paper has opened up an entire new plane for scaling compute: the more inference-time (or “test-time”) compute you give the model, the better it reasons. ![]() Source: OpenAI o1 technical report What happens when the model can think for hours? Days? Decades? Will we solve the Riemann Hypothesis? Will we answer Asimov’s last question? This shift will move us from a world of massive pre-training clusters toward inference clouds—environments that can scale compute dynamically based on the complexity of the task. One Model to Rule Them All?What happens as OpenAI, Anthropic, Google and Meta scale their reasoning layers and develop more and more powerful reasoning machines? Will we have one model to rule them all? One hypothesis at the outset of the Generative AI market was that a single model company would become so powerful and all-encompassing that it would subsume all other applications. This prediction has been wrong so far in two ways. First, there is plenty of competition at the model layer, with constant leapfrogging for SOTA capabilities. It’s possible that someone figures out continuous self-improvement with broad domain self play and achieves takeoff, but at the moment we have seen no evidence of this. Quite to the contrary, the model layer is a knife-fight, with price per token for GPT-4 coming down 98% since the last dev day. Second, the models have largely failed to make it into the application layer as breakout products, with the notable exception of ChatGPT. The real world is messy. Great researchers don’t have the desire to understand the nitty gritty end-to-end workflows of every possible function in every possible vertical. It is both appealing and economically rational for them to stop at the API, and let the developer universe worry about the messiness of the real world. Good news for the application layer. The Messy Real World: Custom Cognitive ArchitecturesThe way you plan and prosecute actions to reach your goals as a scientist is vastly different from how you would work as a software engineer. Moreover, it’s even different as a software engineer at different companies. As the research labs further push the boundaries on horizontal general-purpose reasoning, we still need application or domain-specific reasoning to deliver useful AI agents. 
The messy real world requires significant domain and application-specific reasoning that cannot efficiently be encoded in a general model. ![]() Enter cognitive architectures, or how your system thinks: the flow of code and model interactions that takes user input and performs actions or generates a response. For example, in the case of Factory, each of their “droid” products has a custom cognitive architecture that mimics the way that a human thinks to solve a specific task, like reviewing pull requests or writing and executing a migration plan to update a service from one backend to another. The Factory droid will break down all of the dependencies, propose the relevant code changes, add unit tests and pull in a human to review. Then after approval, run the changes across all of the files in a dev environment and merge the code if all the tests pass. Just like how a human might do it—in a set of discrete tasks rather than one generalized, black box answer. What’s Happening with Apps?Imagine you want to start a business in AI. What layer of the stack do you target? Do you want to compete on infra? Good luck beating NVIDIA and the hyperscalers. Do you want to compete on the model? Good luck beating OpenAI and Mark Zuckerberg. Do you want to compete on apps? Good luck beating corporate IT and global systems integrators. Oh. Wait. That actually sounds pretty doable! Foundation models are magic, but they’re also messy. Mainstream enterprises can’t deal with black boxes, hallucinations and clumsy workflows. Consumers stare at a blank prompt and don’t know what to ask. These are opportunities in the application layer. Two years ago, many application layer companies were derided as “just a wrapper on top of GPT-3.” Today those wrappers turn out to be one of the only sound methods to build enduring value. What began as “wrappers” have evolved into “cognitive architectures.” Application layer AI companies are not just UIs on top of a foundation model. Far from it. They have sophisticated cognitive architectures that typically include multiple foundation models with some sort of routing mechanism on top, vector and/or graph databases for RAG, guardrails to ensure compliance, and application logic that mimics the way a human might think about reasoning through a workflow. Service-as-a-SoftwareThe cloud transition was software-as-a-service. Software companies became cloud service providers. This was a $350B opportunity. ![]() What does it mean to sell work? Sierra is a good example. B2C companies put Sierra on their website to talk with customers. The job-to-be-done is to resolve a customer issue. Sierra gets paid per resolution. There is no such thing as “a seat”. You have a job to be done. Sierra does it. They get paid accordingly. This is the true north for many AI companies. Sierra benefits from having a graceful failure mode (escalation to a human agent). Not all companies are so lucky. An emerging pattern is to deploy as a copilot first (human-in-the-loop) and use those reps to earn the opportunity to deploy as an autopilot (no human in the loop). GitHub Copilot is a good example of this. A New Cohort of Agentic ApplicationsWith Generative AI’s budding reasoning capabilities, a new class of agentic applications is starting to emerge. What shape do these application layer companies take? Interestingly, these companies look different than their cloud predecessors:
We are seeing a new cohort of these agentic applications emerge across all sectors of the knowledge economy. Here are some examples.
By bringing the marginal cost of delivering these services down—in line with the plummeting cost of inference—these agentic applications are expanding and creating new markets. Take XBOW, for example. XBOW is building an AI “pentester.” A “pentest” or penetration test is a simulated cyberattack on a computer system that companies perform in order to evaluate their own security systems. Before Generative AI, companies hired pentesters only in limited circumstances (e.g. when required for compliance), because human pentesting is expensive: it’s a manual task performed by a highly skilled human. However, XBOW is now demonstrating automated pentests built on the latest reasoning LLMs that match the performance of the most highly skilled human pentesters. This multiplies the pentesting market and opens up the possibility of continuous pentesting for companies of all shapes and sizes. What does this mean for the SaaS universe?Earlier this year we met with our Limited Partners. Their top question was “will the AI transition destroy your existing cloud companies?” We began with a strong default of “no.” The classic battle between startups and incumbents is a horse race between startups building distribution and incumbents building product. Can the young companies with cool products get to a bunch of customers before the incumbents who own the customers come up with cool products? Given that so much of the magic in AI is coming from the foundation models, our default assumption has been no—the incumbents will do just fine, because those foundation models are just as accessible to them as they are to the startup universe, and they have the preexisting advantages of data and distribution. The primary opportunity for startups is not to replace incumbent software companies—it’s to go after automatable pools of work. That being said, we are no longer so sure. See above re: cognitive architectures. There’s an enormous amount of engineering required to turn the raw capabilities of a model into a compelling, reliable, end-to-end business solution. What if we’re just dramatically underestimating what it means to be “AI native”? Twenty years ago the on-prem software companies scoffed at the idea of SaaS. “What’s the big deal? We can run our own servers and deliver this stuff over the internet too!” Sure, conceptually it was simple. But what followed was a wholesale reinvention of the business. EPD went from waterfalls and PRDs to agile development and AB testing. GTM went from top-down enterprise sales and steak dinners to bottoms-up PLG and product analytics. Business models went from high ASPs and maintenance streams to high NDRs and usage-based pricing. Very few on-prem companies made the transition. What if AI is an analogous shift? Could the opportunity for AI be both selling work and replacing software? With Day.ai, we have seen a glimpse of the future. Day is an AI native CRM. Systems integrators make billions of dollars configuring Salesforce to meet your needs. With nothing but access to your email and calendar and answers to a one-page questionnaire, Day automatically generates a CRM that is perfectly tailored to your business. It doesn’t have all the bells and whistles (yet), but the magic of an auto-generated CRM that remains fresh with zero human input is already causing people to switch. The Investment UniverseWhere are we spending our cycles as investors? Where is funding being deployed? Here’s our quick take. Infrastructure This is the domain of hyperscalers. 
It’s being driven by game-theoretic behavior, not microeconomics. Terrible place for venture capitalists to be.

Models: This is the domain of hyperscalers and financial investors. Hyperscalers are trading balance sheets for income statements, investing money that’s just going to round-trip back to their cloud businesses in the form of compute revenue. Financial investors are skewed by the “wowed by science” bias. These models are super cool and these teams are incredibly impressive. Microeconomics be damned!

Developer tools and infrastructure software: Less interesting for strategics and more interesting for venture capitalists. ~15 companies with $1Bn+ of revenue were created at this layer during the cloud transition, and we suspect the same could be true with AI.

Apps: The most interesting layer for venture capital. ~20 application-layer companies with $1Bn+ in revenue were created during the cloud transition, another ~20 were created during the mobile transition, and we suspect the same will be true here.

Closing Thoughts

In Generative AI’s next act, we expect to see the impact of reasoning R&D ripple into the application layer. These ripples are fast and deep. Most of the cognitive architectures to date incorporate clever “unhobbling” techniques; now that these capabilities are becoming baked deeper into the models themselves, we expect that agentic applications will become much more sophisticated and robust, quickly.

Back in the research lab, reasoning and inference-time compute will continue to be a strong theme for the foreseeable future. Now that we have a new scaling law, the next race is on. But for any given domain, it is still hard to gather real-world data and encode domain- and application-specific cognitive architectures. This is again where last-mile app providers may have the upper hand in solving the diverse set of problems in the messy real world.

Thinking ahead, multi-agent systems, like Factory’s droids, may begin to proliferate as ways of modeling reasoning and social learning processes. Once we can do work, we can have teams of workers accomplishing so much more.

What we’re all eagerly awaiting is Generative AI’s Move 37: that moment when, like in AlphaGo’s second game against Lee Sedol, a general AI system surprises us with something superhuman, something that feels like independent thought. This does not mean that the AI “wakes up” (AlphaGo did not), but that we have simulated processes of perception, reasoning, and action that the AI can explore in truly novel and useful ways. This may in fact be AGI, and if so it will not be a singular occurrence; it will merely be the next phase of technology.
03/03/2024

Interesting prompt: generate an image of _____ that draws on all information you have and any stereotypical views
01/28/2024

Data Science KN - old tool, my eyes only: https://docs.google.com/document/d/11OcJssHacbNtJRujpo9RWJEUdApK2o3E8C08VQv5BJY/edit?folder=1RyPWFuYjZUOR5HEt4c5VC8DATdpGIPNZ#heading=h.dwp6nxmmg4ct
01/26/2024

ChatGPT Prompts for Customer Personas
ChatGPT Prompts for Customer Personas
JANUARY 22, 2024 • ANN SMARTY

Generative AI platforms such as ChatGPT have access to much data to respond to our needs. Marketers can use that info for many purposes, including understanding their audiences and creating buyer personas. Here are three examples.

Personas for Search Queries

It’s difficult to force ChatGPT to search Google. But it does search Bing. A handy audience prompt is to instruct ChatGPT to analyze search results (i.e., Bing) and reply with the target persona of each page-1 listing. Here’s my prompt for the query “buy laptops”:

Search Bing with “buy laptops” and provide detailed target personas for each page-1 organic listing. Create a table of each persona and include ideas on better targeting that type of buyer.

Here’s the response, which was helpful: “Based on the search results for ‘buy laptops,’ we can identify several buyer personas for laptop customers. These personas can be categorized based on their needs, preferences, and purchasing behavior. Here’s a table outlining these personas and strategies to target them effectively.”

I repurposed the table for this article.
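If you would rather script this kind of persona research than run it by hand in ChatGPT, here is a minimal sketch using the OpenAI Python SDK. The model name is illustrative, and because the plain API cannot browse Bing the way ChatGPT can, the sketch assumes you paste in the page-1 listings (titles, URLs, snippets) yourself.

```python
# Hypothetical sketch: scripting the persona-research prompt with the OpenAI
# Python SDK. Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

def personas_for_query(query: str, serp_listings: str) -> str:
    # Build the same persona prompt as above, with the SERP listings
    # supplied manually since the raw API does not search the web.
    prompt = (
        f'These are the page-1 organic listings for the query "{query}":\n'
        f"{serp_listings}\n\n"
        "Provide detailed target personas for each listing. Create a table "
        "of each persona and include ideas on better targeting that type of buyer."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example usage:
# print(personas_for_query("buy laptops", open("serp_listings.txt").read()))
```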
I have developed a custom GPT to streamline this type of research — just enter your keyword.

Competitors’ Personas

ChatGPT is an excellent competitor research tool. It can identify the tactics competitors use well and the ones they are missing. I first searched Google for the top-ranking listing (a Wired magazine article) for my query. ChatGPT can access URLs unless the site blocks its crawler, in which case you would have to paste the entire text into the prompt. Here’s my prompt:

This URL is from my organic search competitor: [enter URL]. Analyze the page and tell me the targeted buyer personas. Create a table listing each persona and the page elements or wording that informed your conclusions.

Here’s ChatGPT’s response: “Based on the provided text, it’s possible to identify several buyer personas that the article on ‘How to Buy the Right Laptop for You’ targets. I’ll outline these personas in a table, along with the page elements or wording that suggest these are the target audiences.”

Again, I repurposed the table for this article.
Evidently this article is targeting multiple audiences, something to keep in mind when I’m composing my own. I could repeat this exercise for other top-ranking URLs for my keyword.

Reddit Personas

ChatGPT cannot access Reddit. But I can still prompt an analysis of Reddit content by downloading a page from that site as a PDF — “Print” > “Save as PDF” in your browser — and uploading it to ChatGPT. This makes it possible to build a prompt around a Reddit thread about your company or a competitor.

For a test, I uploaded a PDF of a Reddit thread titled “I’m looking to buy a laptop and I’m confused.” Here’s my prompt:

Analyze the Reddit discussion in the attached PDF and extract user personas from it. Create a table listing the personas, likely interests, and how to best approach and engage them.

And here’s the response: “Based on the excerpt from the ‘I’m looking to buy a laptop and I’m confused’ discussion, we can analyze and extract user personas for members likely to be active in such a community. This analysis focuses on the types of users who are engaged in conversations about laptop choices, particularly those considering factors like processor type, battery performance, and cost.”

Here’s the repurposed table:
Next Prompts

Once ChatGPT identifies personas and content ideas, prompt it to:
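Returning to the Reddit workflow above: that PDF step can also be scripted. Below is a minimal, hypothetical sketch that extracts the thread text with the pypdf library and sends the same persona prompt through the OpenAI Python SDK. Both libraries, the model name, and the truncation limit are my assumptions, not the author’s workflow.

```python
# Hypothetical sketch: automating the Reddit-PDF persona analysis.
# Requires: pip install pypdf openai, and OPENAI_API_KEY set.
from openai import OpenAI
from pypdf import PdfReader

client = OpenAI()

def personas_from_reddit_pdf(pdf_path: str) -> str:
    # Pull the text out of the "Print > Save as PDF" file.
    reader = PdfReader(pdf_path)
    thread_text = "\n".join(page.extract_text() or "" for page in reader.pages)
    prompt = (
        "Analyze the following Reddit discussion and extract user personas from it. "
        "Create a table listing the personas, likely interests, and how to best "
        "approach and engage them.\n\n"
        + thread_text[:30000]  # rough truncation to stay under context limits
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example usage:
# print(personas_from_reddit_pdf("laptop_thread.pdf"))
```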
01/12/2024

My favorite 5 prompts for marketers using ChatGPT. - Neil Patel
My favorite 5 prompts for marketers using ChatGPT.

Prompt #1: Please write a blog post about [insert the topic here] and use stats and data throughout the article while citing your sources, and write the introduction in a storytelling format.

By having ChatGPT add storytelling to your introductory paragraph, you keep the reader more engaged. Adding stats and data helps with credibility. This format also makes it easier for me to modify the content.

Prompt #2: Please share interesting stats about [insert the topic here] that very few people are talking about. Please cite your sources.

This prompt gives me ideas for social content, blog posts, and podcast episodes.

Prompt #3: What topics does [insert competitor URL] discuss the most frequently?

This is one of the easiest ways to figure out what your competitors talk about. It will give you specific ideas of what to blog about.

Prompt #4: What are the most popular social posts about [insert the topic] based on likes and shares?

A great prompt to find out what kind of social content you can try posting.

Prompt #5: What are the most popular blog posts on [competitor URL]?
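If you reuse these prompts often, a few lines of templating save the retyping: fill in the topic and competitor URL once and generate all five ready-to-paste prompts. Purely an illustrative sketch; the placeholders simply mirror Patel’s wording above.

```python
# Render Patel's five prompts from a topic and a competitor URL.
PROMPTS = [
    "Please write a blog post about {topic} and use stats and data throughout "
    "the article while citing your sources, and write the introduction in a "
    "storytelling format.",
    "Please share interesting stats about {topic} that very few people are "
    "talking about. Please cite your sources.",
    "What topics does {competitor_url} discuss the most frequently?",
    "What are the most popular social posts about {topic} based on likes and shares?",
    "What are the most popular blog posts on {competitor_url}?",
]

def render_prompts(topic: str, competitor_url: str) -> list[str]:
    # str.format ignores unused keyword arguments, so each template only
    # picks up the placeholder it actually contains.
    return [p.format(topic=topic, competitor_url=competitor_url) for p in PROMPTS]

for i, prompt in enumerate(render_prompts("email marketing", "example.com"), 1):
    print(f"Prompt #{i}: {prompt}\n")
```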
12/30/2023
12/10/2023

https://dexa.ai/ is an LLM AI whose data is based on podcast talks.
12/05/2023

AI companionship
11/27/2023

More on AI assistants/doppelgangers. If everyone has their own, including a business (which could be a doppelganger of the employees or a generic one representing the business), then interactions between customers and businesses will be between the AIs. If I have a problem with an order, I will let my AI handle it. Since the AIs are all trained on LLMs, they will be interacting and doing tasks through LLMs. Think of LLMs as the new APIs. And the beauty is that humans can jump in at any time and understand what the AIs are doing, because they are using LLMs and human language as the medium of interaction.
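To make that idea concrete, here is a toy sketch of a customer-AI talking to a business-AI. The canned replies stand in for real LLM calls, and all names and messages are hypothetical. The point is the transcript: because the medium is plain English, a human can read the log and step in at any turn.

```python
# Toy sketch: two "assistants" resolving an order issue in plain English.
# Canned replies stand in for real LLM calls; everything here is hypothetical.
CUSTOMER_GOAL = "My order #1042 arrived damaged; I want a replacement."

def customer_ai(last_business_msg: str | None) -> str:
    if last_business_msg is None:
        return f"Hello, I represent the customer. {CUSTOMER_GOAL}"
    return "A replacement works. Please confirm the shipping date."

def business_ai(customer_msg: str) -> str:
    if "damaged" in customer_msg:
        return ("Sorry about that. We can ship a replacement or refund you - "
                "which do you prefer?")
    return "Replacement confirmed; it ships within 2 business days."

# The transcript is ordinary language, so a human can audit it or jump in.
transcript, msg = [], None
for _ in range(2):
    msg = customer_ai(msg)
    transcript.append(("customer-AI", msg))
    msg = business_ai(msg)
    transcript.append(("business-AI", msg))

for speaker, text in transcript:
    print(f"{speaker}: {text}")
```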
11/23/2023
85% of consumers not interested in using AI to make online purchases (2 minute read)

There is a stark disconnect between what brands believe consumers want and what they really need from their online shopping experience. 85% of consumers aren’t interested in using AI to help them decide on purchases. Additionally, if offered an AI recommendation, 60% said it wouldn’t make them more likely to buy something. 17% said it would, in fact, make them less likely to make a purchase.
Me here: this makes perfect sense now, but in the years to come it will shift. As we add AI personal assistants that can look at all our past data (what we did, where we went, who we did it with), these assistants will know us and become us (if we let them), and then be able to do things for us in our place, including shopping.
11/17/2023

AI-powered recall - a new term in this article that I included in the Alexandria Project document.
The Plain View

People know Otter.ai as one of the AI-driven transcription services that have popped up over the past few years, automatically converting spoken words in interviews and meetings into text. The service can even distinguish between individual speakers. But its CEO, Sam Liang, sees this handy functionality as just a beachhead into a more sweeping and provocative project: capturing everything you hear into a master dataset where you can search and reexperience every conversation you’ve ever had.

Liang began thinking about this a decade ago, after he left a job at Google to cofound a startup that monitored people’s behavior on mobile devices to provide services like automatically tracking mileage expenses. “I’m obsessed with getting data and understanding data,” he confesses. “In my first startup, we used a lot of iPhone sensors: location, GPS, Wi-Fi, motion. The one sensor we didn’t use was the microphone.” Fixing that would be transformational, he thought. “I was frustrated that with Gmail I could search for something from 10 years ago, but there was no way to search for something I heard three hours ago,” he says. “So I did a thought experiment. What if I keep my microphone on the whole day?” Liang then raised the stakes still further. “What if I do it even better—what if I kept the mic on all the time, my entire life—from the day I started talking till the day I die?” He calculated how much data that would be and figured out that you could store a lifetime of audio on a 2-terabyte USB drive. “Then I can search for everything I heard in my whole life,” he says. “My parents have already died. I really wish I can just retrieve their speech.”

Liang isn’t the only one chasing the dream—or perhaps nightmare—of total AI-powered recall. As I wrote in February 2021, a startup called Rewind has already launched with the promise of life-capture, and it has since tapped the latest AI advances to build out that vision. Founder Dan Siroker recently announced a wearable pendant to more nimbly snare everything within electronic earshot. And just this month a much-ballyhooed new startup called Humane announced a replacement for the smartphone in the form of a “pin” that can also capture voice. These products join countless devices like Amazon’s Alexa with microphones always at the ready, potentially fertile ground for apps that can passively record.

Maybe the rise of generative AI marks the inflection point for this idea. Using that technology, recording corpuses can become datasets through which people can search and summarize their life events, and literally dialog with the minutiae of their existence. It might be like having your personal Robert Caro–level biographer on hand.

As you might expect, civil libertarians have some issues with this concept. Jay Stanley, a senior policy analyst for the American Civil Liberties Union, says that the rise of always-on audio capture raises tensions between personal privacy and the right to record. But he mostly worries about how all that data might be used against people, even if originally intended to enhance their memories. “The prospect raises questions about whether the data will be protected, whether it will be vulnerable to hacking, and whether it can be vulnerable to access by the government,” he says. Overall, he thinks services that record all your conversations are a bad idea. “People might feel like it’s empowering to have a record of everything they’ve ever heard, like a super memory or something like that. But it could actually be disempowering and turn against you.”

Not surprisingly, Liang and Siroker both insist that privacy is built into their systems. Both say that they discourage recording anyone without consent. And of course they vouch for the security of their systems. Just as email is now a staple in legal cases, those recordings would inevitably find their way into courtrooms. Both Otter and Rewind say that they would comply with official subpoenas. Liang says that it would be good to have new legislation that raises the bar on what authorities could ask for in a probe. The ACLU’s Stanley says that this wouldn’t be unprecedented. “Under British common law, it used to be the case that even with a warrant the authorities couldn’t access your diary or your letters, because that was viewed as making you testify against yourself.” Unfortunately, that’s not current law, but Liang wants to revive and spread the concept. He also thinks we should nix state laws that mandate mutual consent if one party in a conversation wants to record. “I am not a politician, but the eavesdropping law is crazy,” he says. “Fifty or 60 years ago, the intention was good, but now things have evolved tremendously with the AI revolution and ChatGPT.” In his view, you shouldn’t be able to share someone else’s words without permission, but if you can hear something, you should be able to retain it. “Everyone has the right to remember,” he says.

An even thornier question is whether people who speak to users of the always-on subscription services will be comfortable with the concept of having all their conversations recorded. People may reject this idea as thoroughly as they did Google Glass, incensed at its ability to passively record video. “This may be determined not in the realm of law, but in the realm of etiquette,” says Stanley. “Do I really want to sit down with my old college friend and have a night at a bar talking about old times when he’s recording everything?”

If the practice of total recording in pursuit of total recall does catch on, it will be due to AI’s ability to provide tangible benefits from exploiting a lifetime corpus of conversation—or even just a digital log of what you forgot from that recent night on the town. Right now, the cutting edge of this phenomenon occurs in business meetings. Because so many now include remote participants, recording the session is a one-button choice that is increasingly invoked. (Liang says Otter has transcribed over a billion meetings.) In the workplace, at least, the etiquette is established. And those recordings are becoming more useful. Generative AI now provides a huge array of features. With Otter, meetings can be summarized in real time with reasonable accuracy, and chatbots allow participants and latecomers to ask whether a specific issue was addressed. During a meeting you can ask whether a topic has been previously discussed. And when the meeting is over, the results can be dumped into a corpus that’s an invaluable guide to a company’s operations.

For Liang that’s just the beginning. He tells me Otter is working on a feature called Avatar that would enable him to run a meeting without the bother of actually attending. It’s essentially a chatbot built around years of past data on his contributions in meetings. “I’m often double-booked, so for those meetings I can send my avatar, which can answer probably 90 percent of the questions people ask me,” he says. I ask him if that might be risky—what if the avatar OKs a business plan that tanks the company? “We’d give it only a certain level of authority,” he says after considering the concept. “Hey, maybe you can approve anything less than $10,000.” Remind me to ask Liang’s avatar to cut me a check next time we’re in the same Zoom room.

Certainly some of the features useful in business meetings could be applied in our personal lives. Chatbots might take on eternal questions like “What went wrong in my marriage?” and deliver shockingly accurate answers. But would we want this? In general, a lot of what we say might not look good if it were recorded and somehow leaked into the open. In a world where everything was recorded, people might feel constrained in expressing themselves freely. Siroker thinks that won’t happen, because AI will be so good at faking someone’s voice that everyone will be able to plausibly deny any faux pas or criminal utterance. That sounds to me like the new version of denying responsibility for a toxic social media post by claiming your account was hacked. It didn’t work too well for Anthony Weiner.

Ideally, the public will have a reasoned discussion about balancing the loss of privacy from having everything recorded with the benefits of having instant access to our verbal history and most replayable personal moments. But I suspect we won’t get to vote on it. Like everything else that happens in technology, it’s more likely that companies will offer it preemptively. Being on the record all the time might win favor by allowing us to recover a brilliant idea we had at lunch or in the shower, replay conversations with departed loved ones, and settle disputes over who agreed to do the dishes. Before we know it, we might be hooked. Too bad—it would have been nice to have that discussion about balance. And, of course, to record it.
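Liang’s 2-terabyte figure is easy to sanity-check. Here is a quick back-of-the-envelope calculation; the bitrate and waking hours are my own assumptions, not his:

```python
# Rough check on "a lifetime of audio fits on a 2 TB drive". Assumptions:
# speech compressed at ~8 kbps (well within what a codec like Opus handles
# for voice) and 16 waking hours per day for 80 years.
KBPS = 8                            # assumed speech bitrate, kilobits/second
BYTES_PER_SEC = KBPS * 1000 / 8     # = 1,000 bytes per second
SECONDS = 16 * 3600 * 365 * 80      # 16 h/day for 80 years

total_tb = BYTES_PER_SEC * SECONDS / 1e12
print(f"~{total_tb:.2f} TB")        # ~1.68 TB, which indeed fits on 2 TB
```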
11/08/2023

ChatGPT Beginners Cheat Sheet
09/14/2023

Top ChatGPT keywords
08/24/2023

AI notes and good thoughts, already integrated into predictions and elsewhere, but included here for searchability.
August 2023 Newsletter: Six AI Themes to Consider
August 23, 2023

One of the big trends in markets throughout 2023 has been artificial intelligence, or AI. I tend to avoid jumping into crowded themes (sometimes to my benefit, sometimes to my detriment), so AI is not something I have written much about publicly. I prefer to absorb new trends and think about them for a while, rather than come out with a hot take right away. I did cover the topic in a May premium report and then again partially in a July premium report. It has been a topic I’ve been following closely despite not writing heavily about it yet, and this issue will draw from those prior reports and more recent information to cover some of the themes that I think are worthwhile to think about and that I don’t see getting a lot of coverage.

And as a housekeeping note, I recently published a book called Broken Money. You can read more about it here or buy it on Amazon here.

AI Themes

The takes on AI have ranged from “this is all hype and there’s nothing here” to “it’s going to kill us all in five years and take over the world”. Unsurprisingly, I fall somewhere between those two extremes: I think there are clearly major breakthroughs here, but any emergent technology will be prone to hype cycles and will get ahead of itself from time to time. Anyone who has taken the time to use some of the latest tools can see the value here, which didn’t exist a few years ago. As the technology matures and becomes ubiquitous, of course it’ll have a significant economic impact. However, any exponential technology is hard to map out with a long timeline.

But a useful exercise is to figure out what the right questions are, even if we don’t necessarily know all of the answers to those questions. There’s nothing worse than not even knowing what questions to ask, or what the potential bounds or bottlenecks are, with regard to what is otherwise a highly complex topic. We must strive to have at least a basic map of the terrain, in other words, even if there are a lot of uncharted areas.

I’m not much of a programmer (the last time I wrote any code at all was probably around 2014), so any views I have on it are more from the engineering hardware side or from an economics angle, rather than anything software-specific. And what I’m mainly going to try to do is figure out some of the right questions to think about, and start mapping out some of the terrain to see where the bottlenecks or inflection points might be.

Theme 1) Technology Through Step-Change

When we think of technological development, we often imagine a smooth exponential line upwards. In reality, technological development is mostly a series of overlapping step-changes. Certain technological breakthroughs occur, cause a multi-decade technological boom, and then run into a hard ceiling and stagnate, until the next breakthrough emerges. That’s why there are often science fiction stories set at a certain date, and by the time we actually reach that date, our world isn’t as advanced as the writer expected (e.g. the 1968 book Do Androids Dream of Electric Sheep? was set in 1992, and its 1982 film adaptation Blade Runner was set in 2019). The reason this is relevant is that without the emergence of a new breakthrough, technology within a given industry often underperforms our expectations (if we assume a continuous growth model).
In contrast, when there is a new breakthrough, technology within a given industry can easily outperform our expectations and change things more quickly than we expect.

Often, the discovery and application of a more concentrated energy source serves as a big step-change. The invention of the steam engine and the associated mining of coal ushered in the industrial revolution, which was then compounded by the invention of the internal combustion engine and the associated production of oil. This gave humanity enough energy density for high-speed travel, broadly available running water, and the mechanization of agriculture. All of these things freed up most people to leave subsistence farming and to work on other things, and to have more sanitary living conditions, which led to a boom in industrial productivity and lifespans. In other words, a large portion of what we attribute to technology is encapsulated in this chart:

[Chart: Global Energy Consumption]

In the medical industry, the discovery of germs and the associated invention of antibacterial medicine, the invention of anesthesia and various surgical techniques, and the invention of vaccines against diseases like polio massively increased survivability into old age during the early 20th century. Eventually our capabilities somewhat stagnated, as we haven’t really increased the maximum human lifespan but have increased the percentage of people that make it into their 70s and older. One day, perhaps gene editing and various age-reversing technologies will create another big step-change toward much longer maximum lifespans.

Humans spent thousands of years wanting to fly without much progress at all. In the 1700s, inventors made the hot air balloon, followed by hydrogen/helium airships, and then stagnated for a while at that level. But then the combination of hydrocarbon production and aluminum production (both of which began at commercial scale in the 1800s) finally allowed humans to create a true flying machine in the form of an airplane at the start of the 1900s. Due to the technological boom that ensued from that, just decades later we had safe commercial transcontinental flights and human space exploration. But then the technology ran into a practical ceiling and development slowed down, with an inability to maintain affordable supersonic commercial flight (e.g. the discontinuation of the Concorde) and an inability to maintain the prior pace of human space exploration. The only period when humans traveled beyond low Earth orbit was a brief window between 1968 and 1972, and now, over five decades later, no human has traveled beyond low Earth orbit since that last time in 1972. And think for a moment about the fact that a civilian had the option to board a supersonic aircraft in the 1980s but has no option to do so today in the 2020s.

The discovery of electricity and the application of it, along with telecommunication systems, was another massive source of innovation which further automated our surroundings and connected the whole world together. This also had a big impact on money: the telegraph enabled transactions to move around the world at the speed of light, which eventually played a role in changing the type of money we use all around the world. The invention of semiconductors then compounded this, giving us science-fiction-level abilities to communicate with each other peer-to-peer through air, using supercomputers that we keep in our pockets. Rather than rely on buttons, nearly the entire surface of each of these pocket supercomputers serves as a controllable glass window into a globally connected digital realm. Imagine describing this technology to your great-grandparents.

[Image: Smartphone]

Artificial intelligence has been with us for a while, since the mid-20th century. And for over two decades, chess programs have been able to beat the best human chess players. At first it required a large supercomputer to beat top human grandmasters, but as technology improved, it could be done by a normal desktop computer, and now can be done by a mobile phone. The field of machine learning has continued to advance, with pockets of commercial relevance but nothing groundbreaking up until recently.

Now, AI seems to have finally hit broad commercial relevance, conceptually equivalent to when engineers put hydrocarbons and aluminum together and opened up all sorts of new possibilities for flight in a stepwise fashion. Specifically, computers and software have reached a tipping point where we can apply AI to all sorts of different software tasks, and it can learn and adjust quickly enough to substantially impact how we work. It’s a step-change, not a gradual year-by-year compounding.

Theme 2) 10x Accessibility

The knee-jerk reaction to AI by investors is to invest in things that are needed to make AI, such as the AI software and the chips that it uses. That trade is already rather crowded at this point. Another theme to think about is to ask, “what products will become 10x more accessible thanks to AI?”

Let’s use Adobe (ADBE) as an example. Photoshop is a semi-advanced program, in that it’s not exactly rocket science to use it, but it does take a considerable amount of time to learn to use it well. This video that I originally retweeted back in May shows how Adobe Photoshop is incorporating generative AI so that the user can easily make dramatic changes in seconds with basic text-based commands. The production of art, movies, software code, engineered designs, and all sorts of complex things is made more efficient by programs that can do many of the tedious and intricate underlying details of the work for the creator.

In addition to making each Photoshop user more productive and powerful (i.e. they can get a lot more done per unit of time), it lowers the barrier to entry for using Photoshop. Just like how computers used to be difficult to use but became easier over time, all sorts of creator applications that have historically been moderately difficult should become way easier. The pessimistic case is that this type of creator AI assistance will put a large percentage of creators out of work, because each creator can do the work of what several creators used to do. However, that assumes that the baseline amount of work to be done won’t change much. The optimistic case is that by broadening access to this type of work, it will dramatically grow the overall usage of and demand for creator applications. In other words, a similar or larger number of creators can do an order of magnitude more creation work, rather than the same amount of creation work being done by an order of magnitude fewer creators. This might be directionally similar to how, by making the cost per megabyte of data storage exponentially less expensive over the past few decades, we began using 1000x or even 1,000,000x as much storage.
Imagine being able to pick up any creator application (art, video, music, programming, 3D printing, engineering software, and so forth) and, through verbal or written commands, intuitively use it to make what you want with most of the annoying details abstracted away. This dramatically lowers the barrier to entry and enables the companies that make those creator applications to sell far more subscriptions to far more users. There will still be more experienced users than others, but the overall pie of who can effectively use advanced creator software at a productive level can go up dramatically.

I originally made this point in my May 28th premium report. At that time, Adobe hadn’t really gained in price much while Nvidia (NVDA) had already soared in price, and so I recommended Adobe as a preferable investment within the AI trend from a risk/reward perspective. As of this writing, Adobe has now soared in price as well, and much more quickly than the broader Nasdaq 100, so I no longer view it as being at a great entry point. However, I think the overall theme of greatly expanding the accessibility of creators is one to be aware of, and I’d be interested in adding to Adobe on corrections.

[Chart: ADBE FASTGraph]

I think a longer-term trend worth taking seriously is how AI can help pharmaceutical and biotech companies. One of the clearest applications for AI is molecule simulation. In many ways, developing new pharmaceuticals is a matter of rapid guesswork and iteration, and AI can do that far faster and less expensively than humans in labs. AI in the years ahead should therefore substantially lower the costs of new drug creation, and allow drugs to come to market more quickly. Whether through new drugs or gene manipulation, if there’s a technology that can accelerate our ability to cure most types of cancers, it’s AI.

The dark side of empowering creators, and bio-engineers in particular, is that AI could, for example, dramatically lower the cost to engineer a deadly new virus. I’m less worried about AI overlords taking over the world than I am about what happens when it costs 1/1000th of what it currently does for rogue human groups to create new bioweapons.

Theme 3) Flip the Question Upside Down

In addition to asking the question, “what will benefit from AI?” we need to ask “what will be disrupted by AI?” By process of elimination, this can lead us to a set of investments that, while they might not benefit greatly from AI, at least won’t be heavily disrupted by it. They will continue to provide good risk-adjusted returns when purchased at low valuations.

AI has the capacity to disrupt all sorts of legacy white-collar businesses and repetitive tasks. Administrative overhead within organizations can be greatly streamlined. Their profit margin is AI’s opportunity to eat.

Unless or until AI helps us solve some foundational physics problems to give us a world full of fusion reactors and nano-machines, however, it’ll likely be slower to affect the physical world than the digital world. What I mostly mean by this is that the production and distribution of energy and materials remains a key limitation. Within, say, a ten-year time horizon, AI has all sorts of capacity to change how we interact with digital systems, but is unlikely to change energy and materials markets much, other than giving us small or moderate efficiencies around the margins (e.g. better software applications for finding resource deposits, better and faster designs for equipment, and cutting down on administrative overhead costs for energy/materials corporations).

To put it another way, the field of robotics isn’t easy, especially outside of a highly controlled manufacturing facility. Robots are complex, energy-intensive, materials-intensive, and maintenance-intensive. As a simple example, robot vacuums were invented decades ago but still aren’t ubiquitous, because they still have all sorts of limitations. Other types of robots are 5x or 10x more complex than that, and thus have greater constraints when it comes to totally replacing humans. To put it tangibly, we’re a long way off from a robot that can reliably drive to my house and fix my HVAC system.

People have historically overestimated the speed of hardware development (supersonic planes, flying cars, robot assistants) and have underestimated the speed of software development. I suspect the same trend will continue over the next decade; software and electronics will absolutely astound us in the years ahead, and yet our physical world will remain remarkably similar to how it is now, with only minor changes. I would adjust this base expectation only if I saw some major physics breakthroughs that start to suggest otherwise.

Theme 4) A World Where Nobody Knows What’s Going On

When humans write software programs, those programs are understood by the relevant people. But with machine learning and iteration, that’s not necessarily the case; software can become so complex that no human really knows what’s going on. AI can write code now, including for making new AI. In other words, a feedback loop can form, without humans in the loop. It’s not difficult to imagine that years or decades from now, this could get… weird.

Right now, at any given time, tons of hackers are trying to breach various software systems that we rely on, and some small percentage of them succeed, which results in a lot of data breaches and other issues. Now, imagine that a group of hackers or a government agency makes an AI that is optimized for hacking. In other words, some AI programs are optimized for beating humans in chess, and some AI programs can be optimized for hacking into systems better than humans can.

Early chess AI learned from games between grandmaster human players, and then could beat those grandmasters every time. Recent chess AI instead learned from playing itself over and over, and then could beat those other AIs at chess. Now, suppose that a similar approach is used for hacking. In fact, I would be surprised if multiple groups were not already working on this. So, first an AI learns how to hack by absorbing existing techniques, and then graduates to learning how to hack by iterating over and over at superhuman speed and rewriting its code to get better and better. Eventually it becomes orders of magnitude better at hacking than any human, and can breach any system that humans have designed. This could be open-sourced and broadly available.

So what’s the defense against that? For systems that are connected to the internet and have a lot of complexity (and thus can’t realistically be made “unhackable” as some offline or basic systems can), the answer is more AI. Similar to how the only rivals to chess programs at chess are other chess programs, we’ll probably reach a point where the only defenses against AIs optimized for hacking are AIs optimized to update and defend software systems against being hacked.
Imagine a world where superhuman hackers are always and everywhere trying to breach everything, and superhuman defenders are always and everywhere trying to prevent those breaches from happening. And both the attackers and defenders are constantly iterating and updating themselves as they learn from each other, while no human really knows what’s going on, because the code is more complex than they can understand and is updated more quickly than they can keep up with. In that scenario, digital gods will be constantly battling each other in cyberspace in ways we can’t even comprehend, and yet the systems we all rely on are what’s at stake in that battle.

Theme 5) Inflation is a Spectrum

For decades, the combination of offshoring and the automation of manufacturing has pushed the cost of goods down. Shoes, textiles, electronics, plastic goods, and similar items have decreased dramatically in price compared to most other things. Price inflation has primarily been limited to services, which could not be as easily automated, and to energy-intensive assets in general. For example, a television is cheaper today than a television of similar size and resolution 20 years ago, even though the supply of currency increased substantially. Televisions became cheaper even faster than the currency was debased. Similarly, with the number of transistors per processor doubling every two years (as per Moore’s law), nothing short of nearly 50% annual money supply growth would prevent the cost per transistor from falling over time.

AI over the coming years and decades is likely to give us a lot of price disinflation in services. AI can eventually allow one accountant to do the work of several accountants, one executive assistant to do the work of several executive assistants, one graphic designer to do the work of several graphic designers, one programmer to do the work of several programmers, and so forth. It’s hard to predict how fast and widespread the impact will be, but that’s clearly the direction. We should have greater abundance ahead, in terms of creating things efficiently and running a business efficiently.

The problem, however, is that the growth of the money supply is likely to remain significant. Debt-to-GDP and debt-to-money-supply ratios remain very high. With the way the debt-based global financial system is currently structured, it’s hard to sustain the current levels of government, corporate, and household debt without greater amounts of broad and base money.

From the start of 2000 to the end of 2022, we can see a significant divergence in asset and consumer prices. The gold price increased by 8.3% per year. The broad money supply per capita grew at 6.8% per year. The price of hospital services grew by 5.3% per year. The oil price grew at 4.7% per year. The median house price grew by 4.7% per year. Childcare prices grew by 4.2% per year. Average hourly earnings for non-supervisory production workers went up by 3.2% per year. Official consumer price inflation averaged 2.6% per year. The average bank account had a yield of less than 2% per year. Apparel was flat in price. Electronics, plastic toys, and software of all sorts dropped in price. This chart shows the official price growth of most of those items, normalized to the start of the year 2000:

[Chart: Consumer Price Index Spectrum]

- Truly scarce things like fine art, gold, and waterfront properties (not shown on the chart) kept up with the growth of the broad money supply, and surpassed official consumer price inflation by a wide margin.
- Semi-scarce things we don’t know how to easily make more of, such as hospital services, childcare services, oil, grass-fed beef, and the median house, went up a bit more slowly than the rate of broad money creation, but still above the rate of average consumer price inflation.
- The official consumer price inflation number was weighed down by all sorts of deflationary items that were assisted by exponential technology that makes them nearly costless to produce.

Plus, the consumer price index is inherently a recursive function. If the price of ribeye steak goes up and consumers shift to lower-cost ground beef in response, then the basket of goods that the consumer price index is calculated from adjusts down to that new baseline. If ground beef gets too expensive and consumers shift to lower-cost poultry, then the basket of goods will be adjusted again. If poultry gets too expensive and consumers shift toward more rice and beans, then the basket of goods will be adjusted again. If butter from grass-fed cows gets too expensive and people shift toward industrially produced soybean oil, then the basket of goods adjusts to that. Thus, highly priced goods and services tend to filter themselves out of the basket, and low-priced goods and services tend to be emphasized.

AI may exacerbate this trend. AI can make several types of services cheaper and more abundant to us, while still not really changing the scarcity of certain goods and services. We need to continue to visualize inflation as a spectrum, spread across multiple different categories.

- If someone wishes to buy finite things like prime real estate, fine art, bitcoin, or the leading tech monopoly stocks, their income and net worth had better keep up with the growth of the broad money supply.
- If someone wishes to own the median house, with a good healthcare plan and retirement savings, along with nutritionally dense food and plenty of energy abundance, their income and net worth had better surpass official CPI, even if it falls a bit short of broad money supply growth.
- If someone wishes to just get by, living in multifamily housing or with their relatives, with their time spent mainly in the virtual world with cheap mass-produced processed food and cheap mass-produced consumer goods, then they can get by with less growth of income and net worth, in line with official CPI or lower.

Theme 6) The Intersection of AI and Bitcoin

I think it’s worth spending a bit of extra time on this theme. It’s not necessarily the biggest theme of the six, but it’s one that I’ve spent a considerable amount of time looking into due to my prior work and research within the Bitcoin ecosystem and adjacent ecosystems.

Theme 6A) What is the Best Money for AI?

Which money is the best form of money for AI agents to use? From what I’ve seen so far, I think bitcoin is the leading answer. In a future world with a lot of human-to-machine, machine-to-human, and machine-to-machine payments, open-source digitally-native money is inherently suitable. I wrote about this topic in my August 2022 article on the Lightning Network, after previously discussing it with Elizabeth Stark, CEO of Lightning Labs:

Plus, the network is capable of doing micropayments that are much smaller than what Visa and Mastercard can do. With Lightning, you can send payments worth a fraction of a penny.
This opens up new use-cases that aren’t possible with credit cards, such as machine-to-machine payments, the streaming of micro-payments, or the usage of micro-payments as a spam-prevention technique.

–Lyn Alden, A Look at the Lightning Network, August 2022

And then more recently, in my May 28th premium report, I highlighted this example:

An AI agent can spin up a bitcoin/cryptocurrency wallet, earn funds, and spend funds. AI agents are already being given bitcoin balances over Lightning, which they can use to access services that they need to complete their assigned tasks:

[Image: Lightning AI]

In that case, the AI agent is purposely set up by the creator to have some funds to spend as needed. However, it doesn’t take much creativity to imagine a sufficiently advanced AI agent being able to recreate this process from scratch. In other words, an AI could start with no money, and be given a task or just “decide” to do a task. As it assesses the best way to do that task, it determines that certain paid services would be helpful or critical to it. So, it generates a bitcoin/lightning wallet (unlike with a bank account, it doesn’t need permission to do this, since it’s open source), and either convinces its creator to give it funds or otherwise finds a way to earn funds online (maybe it posts a bunch of clever content on Nostr and earns tips). Now, with access to money, it pays for things it needs. Imagine the eeriness of discovering that an AI agent acquired money as some middle step along the way toward completing a task that was assigned to it, where even the creators don’t really know how it got the money.

Efficient open-source digital-native money, which can be used faster, permissionlessly, and with smaller amounts (microtransactions) than existing fiat currencies, opens up more possibilities for machine-to-machine payments, machine-to-human payments, and human-to-machine payments. It is a natural type of money for AI agents to use.

–May 28th, 2023 premium report

That trend ended up having legs to it. At the end of June, the bitcoin companies Fedi and Stak teamed up to organize an AI hackathon, which ran through the month of July:

Fedi and Stak are calling on all developers, builders, and pioneers who believe in empowering people and communities through technology: Join us and help democratise artificial intelligence with Bitcoin, Lightning, and Fedimint. Organised with the support of bitcoin and AI companies, the Ai4ALL Hackathon will run remotely on the BOLT.FUN and Replit platforms from July 1 – July 31, 2023. With a series of workshops, live coding sessions, interactive tutorials and mentorships led by key players from both ecosystems, participants can compete for prizes, collaborate with peers, and level up their skills while leveling up humanity. We’re at a critical moment in the development of generative AI that will help determine whether it is controlled by centralisers, censors, and legacy institutions, or by free and empowered people and communities.

–Fedi, June 30th

Those who are curious can read about the winners of the AI hackathon here. Alternatively, the Lightning Labs August newsletter focused a lot on the bitcoin+AI intersection and discussed the winners as well.

In early July, Lightning Labs (a major Lightning Network infrastructure provider) announced a new set of developer tools to support this emerging bitcoin+AI intersection:

- LangChainBitcoin: a suite of tools that enables LangChain agents to directly interact with Bitcoin and the Lightning Network, including two main features:
  - LLM Agent BitcoinTools: a series of tools that leverages the newly released OpenAI GPT function calls to enable developers to create agents that can hold a Bitcoin balance (on-chain and Lightning), send/receive Bitcoin on Lightning, and directly drive an LND node.
  - L402 HTTP API Traversal: a Python wrapper for a LangChain LLM chain (APIChain) that enables agents to transparently access L402 payment-metered APIs.
- Aperture: the latest release of Aperture, a drop-in reverse proxy server implementation of the L402 protocol which can transform any API into a pay-per-use resource gated via L402 functionality and sats on the Lightning Network.
  - Now with Lightning Node Connect support: Aperture supports any Lightning node running litd, a binary that bundles Lightning Labs’ services alongside LND (e.g. Voltage, Umbrel, Start9, Raspiblitz, BTCPayServer), through the Lightning Node Connect protocol, an end-to-end encrypted mechanism for connecting a Lightning node to a web browser.
  - Rich dynamic API endpoint pricing: Aperture now supports the ability to optionally provide full HTTP request context to the proxied backend to enable dynamic per-call pricing. In contrast to the existing static pricing configuration, dynamic pricing allows a backend to implement custom endpoint pricing logic.
- L402 bLIP: a Bitcoin Lightning Improvement Proposal (bLIP) specification for the L402 protocol standard, to openly share the primitives for building these components with the community. L402 is a protocol standard for payment-metered APIs explicitly designed to fit neatly into the existing HTTP 402 Payment Required status response code, which was optimistically included by the creators of the Web to enable native payments. The combination of Bitcoin, the Internet’s native currency, and the Lightning Network enables globally accessible, rapid, scalable micropayments for paid APIs via L402.

–July 6th, Lightning Labs

In plain English, Lightning Labs’ tools are 1) making it easier to build AI agents that can hold and spend bitcoin as part of completing their tasks, and 2) making it easier to pay for resource-intensive computation on a per-use basis using the Lightning Network and the longstanding “HTTP 402 Payment Required” status code.

To see the latter in action, you can watch this two-minute video, which was shared by Lightning Labs’ CTO Olaoluwa Osuntokun. In the video, the user (Kody Low, the Head of Developer and Product Support at Fedi) has GPT-4 write code for him, and pays GPT-4 on a per-question basis (about $0.01 per question) using Lightning. The code he is asking the AI to write is, ironically, the very thing he is already demonstrating as a working service: the ability to pay for API calls with Lightning. More specifically, Kody asked the AI: “How do I create a reverse proxy to gate my api calls with lightning payments? Use an LND backend and write the reverse proxy for me in golang.” He then paid the Lightning invoice. The AI understood the request, received the payment, and provided well-organized and well-commented code to answer the question. Kody then asked it to rewrite the code using the Rust programming language rather than golang, and paid it again.
The AI rewrote the code, and this time also gave more detail in English about what certain parts of the code were doing, and clarified that a fully implemented version would need to incorporate more extensive error-case handling and other nuances, but that this was the basic outline of the program. Kody then thanked it and said AI and Lightning go great together, and paid it again. The AI said that Kody was welcome, and agreed that AI and Lightning are both powerful technologies that work well together to create cutting-edge solutions.

I’ve personally tested one of these programs, where I could access ChatGPT on my phone without using my subscription and pay the equivalent of a penny or two in Lightning with one button press to ask a question, and the pricing would fluctuate dynamically per question, presumably depending on some supply/demand algorithm running in the background.

For fun, I asked ChatGPT for some examples where an AI agent might need to pay humans for things as part of completing its assigned tasks. Here was its response:

“Great question! AI agents might require to send payments to humans as part of completing a variety of tasks. Below are some potential scenarios:

- Gig Economy and Microtasks: An AI might be tasked with finding someone to complete a small task, such as labeling images or transcribing audio, which often happens in gig platforms. After the task is completed, the AI could send the agreed-upon payment to the worker.
- E-Commerce Transactions: If an AI is managing an e-commerce platform or online marketplace, it might need to send payments to sellers or service providers. This could be in response to product sales, refunds, or service charges.
- Content Creators and Online Influencers: AI agents might manage platforms or channels where humans post content or influence others. They could be programmed to distribute payments or rewards to these creators based on views, likes, shares, or other metrics.
- Crowdsourcing and Idea Generation: Companies could use AI to manage innovation or idea generation campaigns, where people are incentivized to submit ideas. Once an idea is selected, the AI could send a payment or reward to the person.
- Customer Refunds: If an AI is handling customer service for a company, it may need to issue refunds for returned products, cancelled services, or compensation for any inconvenience caused by the company.
- Survey Participation: AI might be managing surveys for market research or other data collection purposes. Participants could be compensated for their time and inputs.
- Charitable Donations: AI could be used to manage a charitable organization’s payments to beneficiaries.
- Freelancer Payment: An AI might be employed to manage freelance or contract workers, issuing payment upon completion of work.
- Prizes and Rewards: If an AI is managing a contest or game, it may need to send prize money or rewards to the winners.

These are just some of the many potential applications. The common thread is that any task where a human is providing a service or value that needs to be compensated could potentially be managed by an AI.”

I then asked it for some examples where an AI agent might need to pay another AI agent as part of completing its assigned tasks, and it answered:

“There are indeed many potential scenarios in which an AI agent might need to transact with another AI agent. This could occur in situations where systems are designed to interact and trade resources or services autonomously, such as:

- Autonomous Vehicles: In a future of self-driving cars, AI agents controlling individual vehicles might pay other AI-controlled vehicles for right-of-way or to secure parking spaces.
- AI Marketplaces: In an AI marketplace, one AI might pay another for specific services, such as data analysis, machine learning model training, or optimizing complex tasks.
- Content or Data Acquisition: An AI may pay another AI for exclusive data or content, like proprietary research, market analysis, or digital assets.
- Cloud Computing Resources: An AI managing a server or cloud resources might pay another AI to use its excess computational power or storage space.
- Digital Advertising: An AI managing an advertising campaign might pay another AI to place its ads in optimal locations or times on digital platforms.
- AI-Mediated Auctions: In real-time bidding systems for online advertising, one AI agent may pay another for ad slots that target a specific audience group.
- Gaming: In online gaming scenarios, one AI may need to pay another AI for virtual goods, services, or game advancements.
- Internet of Things (IoT): In a smart home or city, one IoT device (managed by AI) might pay another to prioritize its tasks. For instance, a smart fridge could pay a smart oven to delay its pre-heating function until the fridge has finished its power-intensive cooling cycle.

These examples are all speculative and depend on a complex infrastructure that allows for secure AI-to-AI transactions. But as technology continues to advance, we may see more systems designed to allow this kind of autonomous interaction.”

Now, the right skeptical question for a reader of this part of the piece to ask is, “why pay in Lightning for AI usage, rather than just pay in dollars?” The answer is that Bitcoin/Lightning is uniquely well-suited to working with AI for a few reasons.

No Chargebacks: Unlike many software applications, AI is very hardware-intensive on a per-usage basis. When you use AI, you are using very electricity-intensive GPU hardware. The software itself is cheap once it’s developed, but each usage of that software has a substantial real-world electricity/hardware resource cost, unlike most types of software. If an API is left open to be paid via conventional means (e.g. credit cards), then the merchant opens itself up as a target for fraud. A user could spam the GPUs with thousands of uses, and then perform chargebacks on their payments. In that event, the merchant bore the cost of the electricity/hardware but then lost the payments. Lightning, on the other hand, is a method of bearer-asset final settlement.

Microtransactions: Most payment methods have limits in terms of how small individual transactions can be. You can’t pay $0.03 with a Visa card, for example. PayPal has merchant-side minimum fixed fees of $0.49 per transaction. The closest you can do is pre-fund an account and then pay micro-payments per usage within their ledger. And this all gets more complicated globally and across borders and currencies; if someone in Malaysia wants to pay someone in California for some AI usage, what payment method should they use? Even within my subscription payment backend on my website, my operations manager and I sometimes have issues accepting foreign credit cards, and in many countries people use mobile pay services rather than credit cards anyway. Lightning can handle microtransactions globally.
A user can pay a few cents per usage from anywhere to anywhere, without subscriptions and without pre-funding. And this is highly relevant for machine-to-machine payments, where subscriptions may not be ideal and machines just need to interact with each other on a real-time basis.

Permissionless and Programmable: AI agents can be spun up to do all sorts of things. As an entity with no legal identity, it is easier for an AI agent to hold and use open-source programmable money than to use a bank account or bank payment product. An AI agent might not be able to get a bank account, but it can run a Lightning node or operate a Lightning wallet. And the programmable nature of Lightning, along with its real-time instant settlement, allows for dynamic pricing. In other words, the price per API call can be programmed to vary from second to second depending on how many incoming API requests the service is getting at the same time.

Theme 6B) Proof-of-Work as Spam Prevention

Now that artificial intelligence is making the creation of pictures, videos, texts, programs, and other things almost costless, it's becoming harder to know which content is genuine and which is a high-quality fake video or personality. When we see a picture or a video that might be important from a corporate, social, political, or geopolitical perspective, how can we know that it is actual footage rather than an AI-generated digital forgery? Now imagine what this will be like in five or ten years as the technology improves.

And perhaps more concerning is the convincingness of bots. Whenever I use Twitter (now known as "X"), my posts get flooded with bots. Luckily, they are low-quality, badly programmed, obvious bots. However, there are also human impersonators who manually spin up an account that resembles mine and begin trying to phish people for scams. What if we get to a point where AI bots on social media are actually quite convincing? And what if they can exist in uncountably high numbers, constantly being created as fast as they can be identified and blocked?

In addition to spamming people with bots as a form of fraud, a sophisticated bot strategy might involve building an entire fake community to promote some idea and make it seem popular. Tons of different artificial personalities might be able to operate online and be convincing to many people. Imagine a scenario where someone writes a program that creates an army of bots, each one unique and each one several years more advanced than current ChatGPT, and they coordinate to promote some political candidate, some social idea, or otherwise just boost the perceived audience of some influencer. And now imagine that this program is open-sourced and exists for anyone who wants to use it. Pandora's Box is opened.

Imagine interacting online at a time when 1) the majority of human-seeming entities operating online are not human and 2) it's very hard to determine which ones actually are human. That's a weird environment to navigate. One way to handle it is for social media platforms to enforce an identity check during sign-up. In this context, new entrants would need to provide considerable evidence that they are a real person. Elon Musk's Twitter/X seems to be moving in that direction, as have some other platforms. While that might indeed work for certain platforms, the obvious downside of this approach is that anonymity is an important part of safety and truth in many cases.
People in certain professions who can't comfortably be themselves online use anonymous accounts to relax. More gravely, in many countries certain types of speech are illegal. Leaders can't openly be criticized, certain topics are illegal to talk about, and so forth. In fact, several of the founding fathers of the United States originally used pseudonyms for protection when publishing their pre-revolution critiques of the British monarchy. Authoritarians love forced identity checks for participating in social media, and a rampant bot problem would give them a great excuse to implement them. And they can easily coerce various platforms to hand over the data.

The other approach is that social media can remain open and anonymous, but some sort of proof-of-continuity or payment check can be used to prevent a flood of countless bots. Back in the 1990s, the cryptographer Adam Back invented proof-of-work in the form of Hashcash to defend against various types of spam and denial-of-service attacks. It was eventually used by Satoshi Nakamoto at the heart of the Bitcoin protocol, but hasn't otherwise been widely implemented in social media or other parts of the internet.

Bitcoin's ledger has a practically unforgeable history. Energy is required to update the ledger with each block on the blockchain, and the full history of the ledger has a tremendous amount of energy encoded into it, which makes it very hard to forge. In addition to being usable as a monetary asset, the Bitcoin blockchain now exists as a sort of global, open-source, immutable time capsule, where arbitrary data (including documents, pictures, books, songs, or timestamps from other systems) can be inserted for a cost, and it's provable that what was inserted hasn't been altered since that date. No closed-source, opaque, centralized server can replicate this feature.

Could Bitcoin's proof-of-work be used to secure other types of systems against spam, in line with what Adam Back originally created Hashcash for? There are some avenues that have been explored along those lines, which might become increasingly relevant in the years ahead.

Payments: Bitcoin is currently used by Stacker News as both a posting incentive and a form of spam control. Users have to pay a tiny amount of fractional bitcoin to post, and can tip each other with fractional bitcoin. The problem with traditional payments like Visa is that they identify the person, whereas bitcoin provides proof-of-payment without requiring the identity to be known, which is sufficient for spam prevention. This could be tailored in various ways. If Twitter/X were to implement a similar feature, then maybe, for example, I could have a setting that allows only accounts with some type of micro-cost associated with their identity to comment on my posts, in order to prevent spam.

Identity with a Cost: Someone can create a public/private key pair (like on Nostr), but then go a step further: embed the public key into the Bitcoin blockchain with a real-world resource cost due to transaction fees, verify to people in various ways that this key is indeed theirs, and use the private key that goes along with that public key to sign any digital content that they create or authorize. The same can be true for any large journalistic organization with known integrity standards. Platforms could check to see which key pairs are embedded or not.
From that point on, this allows people to differentiate forgeries from genuine content (at least to the extent that it was created by that individual or organization and hasn't been altered since publication, not necessarily that it's objectively true), and it makes it so that a million impersonators cannot use similar key pairs that have been embedded, with real-world expense, into the Bitcoin blockchain.

Although we're not quite at that stage yet, in a world of sufficiently advanced artificial intelligence and virtually costless impersonation or forgeries, we might learn to demand proof-of-continuity (cryptographic public/private key pairs) and proof-of-work (evidence that real-world resources were expended to add weight to a given key pair or online identity) before taking certain types of digital content seriously. In that potential future world I previously described, where 1) the majority of human-seeming entities operating online are not human and 2) it's very hard to determine which ones actually are human, we would need various ways to sort out which entities have some sort of real-world tangibility or cost associated with them, to find signal through the near-infinite ocean of noise. Identity checks or the usage of a real-world resource cost are the two main directions that platforms can take to deal with that.
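To make the two mechanisms concrete, here is a minimal sketch in Python. It is a toy under stated assumptions: the difficulty target, key scheme, and message formats are all invented for illustration and are not Hashcash's or Nostr's actual wire formats. The first pair of functions shows the Hashcash-style idea of attaching a real compute cost to an action:

```python
import hashlib
from itertools import count

def mint_stamp(action: str, bits: int = 20) -> str:
    """Brute-force a nonce so that SHA-256(action:nonce) starts with `bits` zero bits.
    Each extra bit roughly doubles the expected work, so difficulty is tunable."""
    for nonce in count():
        stamp = f"{action}:{nonce}"
        digest = int.from_bytes(hashlib.sha256(stamp.encode()).digest(), "big")
        if digest >> (256 - bits) == 0:
            return stamp

def check_stamp(stamp: str, bits: int = 20) -> bool:
    """Verification costs one hash: cheap for the platform, expensive to forge in bulk."""
    digest = int.from_bytes(hashlib.sha256(stamp.encode()).digest(), "big")
    return digest >> (256 - bits) == 0

# A platform could demand a fresh stamp per post; a million bots would need
# a million stamps' worth of real electricity.
assert check_stamp(mint_stamp("post:hello-world"))
```

And a sketch of the proof-of-continuity half: signing content with a persistent key pair, here using the third-party `ecdsa` package. (Nostr actually uses Schnorr signatures over secp256k1, so this is directionally similar rather than compatible.)

```python
import hashlib
from ecdsa import SigningKey, SECP256k1  # pip install ecdsa

signing_key = SigningKey.generate(curve=SECP256k1)  # private key: kept by the author
verifying_key = signing_key.get_verifying_key()     # public key: published for all

content = b"Genuine article text, image bytes, or video bytes"
signature = signing_key.sign(content, hashfunc=hashlib.sha256)

# Anyone holding the published public key can confirm the content was authorized
# by that identity and unaltered since signing (raises BadSignatureError otherwise):
assert verifying_key.verify(signature, content, hashfunc=hashlib.sha256)
```

Embedding the public key into the Bitcoin blockchain, as described above, is what would add proof-of-work weight behind the key; the sketch only shows the signing layer on top.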
06/17/2023 | AR fashion try-on company ZERO10 to unveil prototype AR Store at Viva Technology in Paris this week.
AR fashion try-on company ZERO10 to unveil prototype AR Store at Viva Technology in Paris this week.

After the successful showing of its AR try-on technology at Metaverse Fashion Week, AR fashion company ZERO10 is launching its first prototype AR Store, which aims to bring a whole new experience to shopping. Following previous successful collaborations with fashion brands like Tommy Hilfiger and Coach – which saw the launch of the brands' AR Mirror and Storefront, respectively – ZERO10 is now debuting the AR Mirror as a standalone store.

"We believe that 50% of physical retail will incorporate AR solutions or become AR stores themselves in the next 10 years," said ZERO10 CEO George Yashin. "Customers want to try new types of shopping, and augmented reality will elevate the future in-store experience. We see the technology as a powerful tool for retail and we are still at the beginning."

The prototype AR Store aims to be an immersive solution for retail stores while also addressing the common issues that come with brick-and-mortar establishments. This iteration of the AR Mirror is particularly notable for its ability to deliver a standalone retail experience, in contrast to simply enhancing the experience of shopping in a traditional store.

AR Store for a Full AR Shopping Experience

ZERO10 has been steadily growing its AR-to-physical capabilities, befitting a fashion company that seeks to improve everything about the retail experience. Already a thriving hub for forward-thinking fashion designers, the AR Store represents the exciting possibilities of a full augmented-reality shopping experience. With a simple setup (requiring only the AR Mirror, an iPad stand, a statistics screen, and storage space), the prototype AR Store promises to deliver a seamless shopping experience without the need for large floor space. With its minimal requirements for both space and construction, the AR Store can deliver all the functions and operations of a traditional store averaging 100 square meters in just two square meters of total store space.

All of this is possible thanks to ZERO10's proprietary AR Mirror solution, which can run complex computer-vision models and render clothing in 4K resolution. It does all of this in real time, powered by a supercomputer that boasts the latest in 3D body tracking and multi-class segmentation capabilities.

The Future of Brick-and-Mortar Stores?

Traditional storefronts may no longer be enough by themselves to entice consumers to shop at their establishments. Many retailers have started embracing the capabilities of augmented reality and other similar technologies to enrich the customer experience and help them stand out from their peers. This phenomenon (tentatively called v-commerce) not only enriches the usual in-store experience but solves many of the issues that brick-and-mortar establishments face. Storage space, floor traffic, and even employee tasks become far easier to manage with the help of AR technology. While it's difficult to say that AR solutions like AR Mirrors and ZERO10's AR Store will render brick-and-mortar establishments obsolete, it's fair to say that they're a development in the retail shopping experience that can't be ignored.

Forward-Facing Fashion at Viva Technology Europe

ZERO10's AR Store prototype will be interactable for the first time at Viva Technology, Europe's biggest startup and tech event, taking place June 14-17 in Paris, France.
The prototype will be located in Innovation Park, ALLÉE NORD, Digital Experiences and AI, stand J61-007. Visitors at Viva Technology will be able to experience the entire customer journey of the AR Store, starting with trying on the digital merch ZERO10 created together with Viva Technology, also available physically in limited stock. Following the virtual try-on with the AR Mirror, customers will be able to initiate an imitation of a real purchase, selecting their sizing and receiving a free physical analog of their selected item at the reception desk. With this, ZERO10 hopes to make more people see the possibilities of AR retail experiences and use AR technology to enrich the normal fashion shopping experience.
Consumer Discretionary | Consumer Discretionary: Textiles, Apparel & Luxury Goods | Apparel and Personal Accessories
06/14/2023
Streaming TV slow-motion train wreck/dumpster fire/shit show... use your own colorful description.
I've been reading for at least a year now about how the streaming TV model is broken and why. A great article surfaced this week and got me thinking about how streaming TV might right the ship. The industry is classically suffering from too much supply, high production costs, and too little revenue. Consider my situation: we cut the cable cord maybe 5 years ago, and at the time were paying around $130 per month for the usual cable/satellite package. With streaming we might pay for 3 services a month, so our cost is less than $50. We finish binge-watching, cancel, then re-up 3-6 months later or whenever there are new shows worth watching. I think that describes most other people as well. So how does the industry fix this, and what does it mean for consumer brands? Some thoughts.
Streaming TV is targeted, so I foresee the potential for consumer brands to use streaming TV advertising that is highly targeted, versus the mass-market approach of old network TV advertising. Couple that with using AI to create ads, each of which can be tailored to each person, and TV ad production costs drop toward nothing. I remember creating 30-second spots for $50K ten years ago. With AI, I can see creating hundreds or thousands of ads for a fraction of that. I don't know exactly what that figure is now because the capabilities are not quite there yet (2026, estimated at the time of this writing), but what if it is a few thousand dollars? This could mean great ad opportunities for brands, although if the cost to produce is low, meaning almost anyone can do it, will demand for ad inventory skyrocket its price? I don't know. The point is that consumer brands should be looking for these opportunities and how to leverage them when we get to the point where there is a targeted streaming TV model, after the industry rights the ship and AI can produce ads.
06/13/2023 | 12 Ways Web3 Media Could Embrace AI
12 Ways Web3 Media Could Embrace AI

From chatbots to deep blockchain data analysis, artificial intelligence can help Web3 news organizations function. But there are plenty of pitfalls too. By Jeff Wilser, CoinDesk, Jun 7, 2023.

This article was not written by AI. But will that be the case in the future? As questions about artificial intelligence dominate every industry -- from crypto to law to farming -- it's only natural that newsrooms, too, are weighing the promise and the peril. "Artificial intelligence will soon transform media on a scale and pace that rivals the internet two decades ago," predicts a recent Axios newsletter. "The media companies that survive -- and thrive -- will be those that adapt quickly to fast-changing consumer needs."

Which brings us to the spirit of this exercise: How could a savvy media website -- specifically a Web3-focused publication, such as CoinDesk -- smartly use AI in the future? Will we fleshy organic humans, with our pesky needs for sleep and food and paychecks, soon be replaced by machines? And is it worth even opening this Pandora's box?

"I think there's a first-order question for newsrooms to ask themselves, which is, 'Why are we even using these tools in the first place? Is there a need?'" says Claire Leibowicz, head of the AI and Media Integrity Program at The Partnership on AI, a non-profit organization devoted to the mandate of responsible AI. Leibowicz's team concluded that ultimately, yes, AI can be leveraged as a tool to support sustainable journalism…with some important caveats. It could give a boost to small local newsrooms, for example, which might otherwise be headed for extinction. This doesn't mean that AI should replace human writers and editors. (I'll cop to some bias. Full disclosure: I am a human writer.)

Let's start with the range of journalistic functions that Leibowicz says AI should not be doing, which includes editorial decision-making, story curation, interviewing, and the writing of high-stakes stories. "We should take a step back and consider which stories are high stakes," says Leibowicz. "Any story can be high-stakes, whether it's pop culture or public health, and there should always be a human in the loop."

That's the theme from every expert I interviewed: AI can be an additive tool, but not a substitute for humans. There is a very real risk, of course, that AI could gobble up jobs anyway. It would be naive to ignore the possibility. But on a more optimistic note, Zain Kahn, who writes the AI-focused Superhuman newsletter, says that AI is "like having a very competent intern. It saves you from the work that you find to be drudgery anyways."

The Partnership on AI has grouped the role for AI in newsrooms into three buckets, which we'll adopt as a framework for this piece:
i. Lead Generation and Investigative Tools
ii. Content Creation and Distribution
iii. Audience Engagement

It's early days for AI. (Everyone in crypto, of course, is familiar with the theme of "early days.") There will surely be more use cases, and this list is hardly comprehensive. So with all of those disclaimers in mind, here are 12 ways that Robot Journos could help a Web3 newsroom:

Bucket I: Lead Generation and Investigative Tools
1. Spot trends and insights from large pools of data

Even the speediest of readers have trouble poring through 5,000 pages of financial documents, court papers, or on-chain transactions. AI can help with this, and it's already happening. "A classic example is the Pandora Papers," says Leibowicz. Journalists used machine learning to help make sense of 11.9 million documents containing information on the secret offshore accounts of presidents and billionaires, which they used to break a Pulitzer-winning story.

2. Analyze social media trends

The question of "what to cover?" is at the heart of every newsroom. AI could suss out what the crypto community is most interested in. "There's a world where AI tools are analyzing social media and trending topics, and we think both of those are vital," says Leibowicz. But she adds a note of caution -- trending topics might actually mask the more important stories that no one's talking about. "You don't want to overlook these insights in favor of an algorithmic tool that's deriving its data from the more macro picture," she says. Put differently, it's the newsroom's job not simply to react to what people are chattering about, but to seek out and report on the stories that people will be talking about. (See: CoinDesk breaking the FTX story.)

3. Help brainstorm story ideas and angles

While AI shouldn't replace human brainstorming, it could be used as a jumping-off point. Maybe it could get the creative juices flowing? "Why not have a fallible but potentially creative thought partner in coming up with lead generation ideas?" says Leibowicz. Kahn, of the Superhuman newsletter, gets even more concrete. "Let's say you want to write a piece and you're not sure what the angle is," says Kahn. "You can say, 'Give me a list of 10 article ideas based off the text below.'" Many of those will be garbage, but only one needs to be useful.

Bucket II: Content Creation and Distribution

4. Write (low-stakes) automated news articles

This is tricky and controversial. The consensus from the AI experts, generally, was that human beings should stick to actually writing the news. But Leibowicz suggests that in addition to the stories that would already be written, AI could be used to knock out "lower stakes" stories that would otherwise fall through the cracks. This is already happening. Leibowicz says the BBC used ChatGPT to write "7,000 or so hyper-localized news stories on British shopping trends," which is an example of creating "stories that might otherwise be overlooked." Or maybe a local newsroom doesn't have the bandwidth to cover every City Council meeting, but AI could scrape the transcripts and whip up some quick summaries. It's easy to see the application in Web3. There are hundreds if not thousands of projects, conferences, and meet-ups that aren't being covered by crypto media -- AI could help plug the gaps. Of course, what counts as "low stakes" vs. "high stakes"? Does AI have the expertise to understand this rapidly evolving field? Can AI differentiate between the authorities and the hucksters? None of these questions are easy.
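To make ideas #3 and #4 concrete, here is a minimal sketch of Kahn's prompt run against OpenAI's chat API. The model name and prompt wording are my assumptions for illustration, not anything CoinDesk or the people quoted here actually use:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

def brainstorm_angles(source_text: str, n_ideas: int = 10) -> str:
    """Ask a chat model for story angles based on supplied text, per Kahn's prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model would do
        messages=[
            {"role": "system",
             "content": "You are a brainstorming partner for a Web3 newsroom."},
            {"role": "user",
             "content": f"Give me a list of {n_ideas} article ideas "
                        f"based off the text below.\n\n{source_text}"},
        ],
    )
    return response.choices[0].message.content

print(brainstorm_angles("Transcript of a two-hour city council meeting..."))
```

Most of what comes back will be discarded; as every expert quoted here stresses, a human stays in the loop to pick the one idea worth pursuing.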
5. Quickly translate technical papers for a mainstream audience

AI is good at analyzing dense, weedy academic papers and plucking out the insights. Kahn suggests feeding ChatGPT a long, intimidating document -- a blockchain white paper, for example -- and then asking it for 20 key insights. Humans, ideally, would then vet this for accuracy and coherency.

6. Summarize crypto-adjacent news

"I don't think you're going to get replacements for reporters," says Nathaniel Whittemore, who now hosts a daily AI podcast as well as his daily Web3 podcast. He imagines that traditional journalism -- with human interviews, reporting, and a dose of skepticism -- will help publications differentiate themselves from AI-reliant content farms. That being said, Whittemore imagines that in the Web3 space specifically, AI could be used to quickly summarize crypto-adjacent news. "CoinDesk is the destination website for a lot of people in Web3," says Whittemore. But CoinDesk is not the destination for all business or financial news. So for areas outside of its core competence, why not use AI as a service to the readers? What happens to the S&P 500 is not exactly CoinDesk's bailiwick, but it's of interest to most in the space. So Bloomberg or Wall Street Journal articles could be summarized and then linked to by AI, says Whittemore, as these are "sort of orthogonal to their interests, but relevant."

7. Extract insights from long videos

There are already plug-ins for ChatGPT that allow you to instantly summarize and synthesize videos. Those will only get better. "Let's say there's a Bitcoin conference, and some dude's yapping for two hours," says Kahn. "Don't watch the full two hours. Plug it in, and within a couple of minutes, you can get the whole transcript, or a few bullet points." And if there's one thing the entire space can agree on, it's that there's a lot of yapping at crypto conferences.

Bucket III: Audience Engagement

8. Create AI chatbots

Francesco Rulli, an Italian businessman and philanthropist, has already used AI chatbots to quickly scale education platforms for young Afghan women. (Teaching financial empowerment, for example, when they lack that education at home.) He suspects that a Web3 website could do something similar. Rulli imagines that the AI could be trained on an accepted body of Web3 knowledge -- an archive of CoinDesk explainers, for example -- and could then serve as a dynamic guide for those learning about crypto. "You have a digital assistant that helps you to navigate," says Rulli, "just like Virgil did with Dante in the inferno."

Then there's the wilder idea. Rulli envisions an AI chatbot that's primarily about crypto education. But if we expand the idea further, perhaps chatbots could even change the very nature of news itself. What if instead of reading an article, you chat with a bot that tells you about the news? Getting even more out there, perhaps the AI Newsbot is able to hoover up data and insights and facts and stories, and dynamically answer any questions you might have about what's happening in the world. This one is obviously more speculative. Back to the shorter-term utilities…

9. Facilitate better reader comments and interactions

Comments are the cesspool of the internet. AI could help clean this up. "The New York Times relies on something called Perspective API," says Leibowicz, which uses machine learning to rank comments based on an automatic (if fallible) detection of how toxic they are.
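A rough sketch of what a Perspective-style toxicity check looks like, based on the Perspective API's public documentation (the endpoint shape and the threshold are my reading of the docs, so treat them as approximate):

```python
import os
import requests  # pip install requests

API_KEY = os.environ["PERSPECTIVE_API_KEY"]  # issued via Google Cloud
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(comment: str) -> float:
    """Return Perspective's 0-1 probability that a comment is toxic."""
    body = {
        "comment": {"text": comment},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=body, timeout=10)
    response.raise_for_status()
    return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# A moderation queue might auto-hold anything scoring above, say, 0.8 for human
# review -- fallible, as Leibowicz notes, so a person still makes the final call.
```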
10. Quickly transform content to publish across other platforms

Every newsroom struggles to keep up with the changing whims of content platforms -- one day Facebook is hot, the next day it's Snapchat, the next day it's TikTok, soon it will be that trendy hologram start-up. AI could easily help publish across platforms. "You as a journalist can write this story, but maybe there are other modes of storytelling that can reach a different audience," says Leibowicz. With a few clicks you could transform a 1,000-word article into a punchy TikTok video.

11. Scale and automate social media

Once an article is finished, every editor I know dreads this lingering chore: thinking up "call to action" tweets. This is not why anyone went to journalism school. Here's where AI shines. Kahn says that AI is already a powerful tool for quickly creating graphics and prompts for social media. Feed it an article, then ask it for 10 Twitter prompts. In the future, you'll likely be able to automate the entire flow.

12. Customize the news reading experience

"The AI can actually match interests with information, and information with interests," says Rulli. For example, he notes that he is Italian, so naturally he's more interested in news from Italy. The AI could learn readers' preferences and tailor the content to their appetites. Are you into NFTs? Great, then you'll get more of that. Bitcoin maximalist? Then more bitcoin and Lightning network and Michael Saylor.

Final thoughts

The potential is dizzying. So are the risks and the caveats, such as concerns about data privacy, bias in AI, inaccuracy and hallucinations, an over-reliance on AI, security, the loss of a personal touch, legal murkiness, and ethical issues. Many of these disclaimers were suggested by our final AI expert, who adds a note of caution: "As always, it's important to carefully consider the implications of AI integration and to use AI responsibly," says the expert. "AI can be a powerful tool for improving efficiency and user experience, but it should be used in a way that respects user privacy, maintains accuracy, and upholds the site's values." This AI expert, of course, is ChatGPT. Edited by Ben Schiller.
06/09/2023
Google DeepMind's game-playing AI just found another way to make code faster.

The news: A year after DeepMind used a version of its game-playing AI AlphaZero to find new ways to speed up the calculation of a crucial piece of math, the AI research lab has pulled the same trick again—twice. Using a new version of AlphaZero called AlphaDev, it has discovered how to sort items in a list up to 70% faster than the best existing method, and also found a way to speed up a key algorithm used in cryptography by 30%.

Why it matters: These algorithms are among the most common building blocks in software. Small speed-ups can make a huge difference, cutting costs and saving energy.
06/08/2023
This AI-for-fashion lets you launch and run a clothing line with just a prompt
This AI-for-fashion lets you launch and run a clothing line with just a prompt

CALA's technology uses AI tools to design, manufacture, and ship garments, making it easier for independent designers to start their own labels. By Elizabeth Segran.

For years, I've felt a gap in my wardrobe in the months when the cold just begins to creep in here in Boston—a time when you don't need a heavy-duty puffer but you're not yet ready for a lightweight spring raincoat. In my mind, I've been dreaming up the perfect outerwear: it's a cozy, high-quality, oatmeal-colored wool that's cut a little like a trench coat, but shorter, hitting the thigh. It would look put together but also provide the correct amount of warmth. But even though I've been searching for it for years, I've never found it.

This year, I had an opportunity to design a garment from scratch, so I took the opportunity to create this dream jacket. I used a platform called CALA, which was founded in 2016 and deploys AI and machine learning to streamline the entire fashion supply chain, from brainstorming designs to manufacturing products and pricing them. The company currently has more than 40 brands and independent designers using the platform, including NBA teams and the designer KidSuper.

BRAINSTORMING WITH AI

This service offers a glimpse into how new technologies could make it radically easier for designers to launch fashion brands, by simplifying some of the most complex aspects of making garments. "We have the technology to spin out a Shopify store or build an audience on Instagram," says Andrew Wyatt, CALA's founder and CEO. "But it was still complicated to create a physical product. We're trying to solve for that."

This year, CALA updated its technology to incorporate DALL-E, OpenAI's system that creates realistic images from natural language descriptions. (OpenAI is also the maker of ChatGPT.) Prior to this, you could upload images of products you were inspired by, creating a kind of "mood board" for your collection. Now, however, you can just use words to describe what you're looking for. I began writing out details about the jacket I wanted: "oatmeal-colored," "thigh-length," "wool," "trench coat." In a click, these descriptions generated six images of a jacket that came close to what I was looking for. None of them were exactly like what I had in mind: one looked too much like a cardigan, the others weren't the right length. So the next step was to pick the jacket that came closest to the one I had in mind and ask CALA to keep generating more images like it. I had a fun afternoon doing this, getting closer and closer to my ideal jacket.

The platform makes it easy to work with other people on these designs. You can make notes on each image, then flag other members of your team who can comment on the design, which might help you iterate further on it. And interestingly, the images helped me refine the details of the jacket I wanted. I saw versions of the jacket I hadn't imagined before but that I actually kind of liked, including ones with big buttons, cuffs at the wrist, and large exterior pockets, so I added those descriptions to my prompt. By the end of the session, I had not only designed the jacket I had always wanted—the AI had also prompted me to brainstorm and refine my thinking. "The more different ideas you can generate, the higher the probability you're going to edit down to something that's wonderful," says Wyatt.
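CALA's DALL-E integration is proprietary, but the loop the author describes—prompt, inspect, fold the good details back into the prompt, regenerate—is easy to sketch against OpenAI's public image API. The model choice and prompt below are my own assumptions, not CALA's actual backend:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set

# Each iteration: generate, eyeball the result, then append the details you
# liked ("big buttons", "cuffs at the wrist") to the prompt and run it again.
prompt = ("oatmeal-colored thigh-length wool jacket cut like a short trench coat, "
          "big buttons, cuffs at the wrist, large exterior pockets, studio photo")

result = client.images.generate(
    model="dall-e-3",   # assumption; dall-e-3 returns one image per request
    prompt=prompt,
    size="1024x1024",
)
print(result.data[0].url)  # temporary URL of the generated image
```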
The DALL-E image-generation process mimics what happens in design studios. Many fashion brands hire lots of designers who are tasked with sketching out dozens of designs based on a few prompts. Then the creative director looks through the ideas, picks the ones that fit his or her vision, and refines the sketches from there, until they get to the look they want for the season. "Historically, designers had to be able to do beautiful hand drawings," Wyatt says. "Alexander McQueen's and Karl Lagerfeld's sketches are iconic. But fast forward to Virgil and Kanye: they're taking photos of things they are interested in, sending them by WhatsApp to their design teams, who then do a mockup in Adobe Illustrator and kick it back to them. They kind of riff back and forth that way."

Wyatt's goal was to reduce the manual labor of creating those sketches, most of which will be discarded. "We found that some fashion brands were having designers pump out 90 designs a week," he says. "With this AI model, one person could easily generate 90 ideas in one afternoon, and quickly iterate on each one."

This technology, if it becomes widespread, could disrupt the fashion business. Automating so many aspects of the design process could result in brands cutting down on the number of designers they need to do these quick sketches in design rooms. In turn, this could mean fewer jobs for junior designers, who rely on these jobs as they leave design school and begin building their résumés. But Wyatt believes that it could also give young designers more autonomy by launching their own small businesses, rather than doing years of apprenticeships at bigger companies.

FROM IDEA TO GARMENT

Using AI as a brainstorming tool is certainly helpful. Many people in various fields find that ChatGPT is useful for coming up with ideas. But being able to transform an AI-generated image into a physical garment is a much more complicated proposition—and that's precisely what CALA allows you to do. Once you've settled on a design that you like, you can begin to turn it into a reality. CALA has relationships with 20 factories in 10 countries around the world, including Vietnam, China, Turkey, Italy, and Portugal. Based on the particular product the user wants to make, the system will determine which factory is the best fit, and incorporate all the information into its pricing algorithm. Products can be turned around in as little as 40 days if you opt for expedited shipping. "During the pandemic, when one country shut down its production, we were able to send orders to a different country easily, so brands didn't have delays," says Wyatt.

With its quick turnaround times, this system could make fast-fashion business models more accessible to other brands. This could add to the damage, since quickly churning out trendy garments is bad for the planet. While CALA doesn't actively steer brands away from this approach, Wyatt says that it offers tools that allow for more eco-friendly business models as well. The less expensive shipping options take several months to be delivered, and brands can also express a preference for more sustainable fabrics—like recycled cotton or polyester—as they put their collections together.

The first step is to use the platform's pricing tool to see how much it would cost to make the garment. CALA's algorithms can determine pricing based on how many units you want, incorporating the price of materials, labor, shipping, and even taxes.
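CALA's pricing algorithm isn't public. As a toy sketch of the kind of cost build-up described here, with every number and the volume-discount rule invented purely for illustration:

```python
import math

def quote_unit_price(units: int, materials: float, labor: float,
                     shipping: float, tax_rate: float = 0.08) -> float:
    """Toy per-garment quote: production cost gets a volume discount,
    then flat per-unit shipping and tax are layered on top."""
    # invented rule: 2% off production cost per doubling of volume, capped at 20%
    discount = min(0.20, 0.02 * math.log2(max(units, 1)))
    production = (materials + labor) * (1 - discount)
    return round((production + shipping) * (1 + tax_rate), 2)

# e.g. 500 units at $18.00 materials, $9.50 labor, $3.20 shipping per unit
print(quote_unit_price(500, materials=18.00, labor=9.50, shipping=3.20))
```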
Then, if you're happy with the suggested price, you can send the designs off to be manufactured. Once your design is in the factory, the factory managers there will get added to your team and keep you updated about how things are going. They might send you images of prototypes so you can make adjustments, or let you know that quality control has been done. "We've trained all of our factories in our technology," Wyatt says. "A lot of the people on the ground we work with are Gen Zs and Millennials who are English-speaking, which makes the communication much easier." Then, when the clothes are ready, they are shipped to the brand's warehouse, ready to be sent to customers. CALA will even produce images of each garment so it is ready to upload to an e-commerce site.

THE FUTURE OF FASHION

CALA streamlines the complex fashion supply chain, allowing designers and brands to create products more easily. Traditionally, fashion brands need to hire a range of specialists, from designers to pricing experts to supply chain managers, each responsible for one part of the process. Wyatt's goal was to lower the barrier to entry for designers launching brands. Today, CALA works with a range of fashion players. Some are small designers eager to get their brands off the ground and start selling collections of their designs. But CALA also works with much larger companies. An enterprise version of the technology allows companies to tailor it to their needs. Some use it just for brainstorming and designing. Others want to be able to send designs to their own factories, so the system is adapted to onboard new manufacturers.

CALA charges a $100 monthly fee, along with 15% of the cost of manufacturing, to use its platform. (A simpler version of the platform, which allows only two collaborators, is free.) But the whole system reveals how technology can make even the most complicated industries more accessible. Now designers can focus on what they do best—creating compelling and expressive clothing—and leave CALA to worry about sourcing. "Our whole purpose as a company is to power physical product creation," says Wyatt. "And when new technology becomes available, we can integrate it, so everyone on the platform can benefit from that."
Consumer Discretionary | Consumer Discretionary: Textiles, Apparel & Luxury Goods | Apparel and Personal Accessories
06/06/2023 | As the AI industry booms, what toll will it take on the environment?
As the AI industry booms, what toll will it take on the environment?

Tech companies remain secretive about the amount of energy and water it takes to train their complex programs and models. By Maanvi Singh (@maanvissingh), The Guardian, Thu 8 Jun 2023.

One question that ChatGPT can't quite answer: how much energy do you consume? "As an AI language model, I don't have a physical presence or directly consume energy," it'll say, or: "The energy consumption associated with my operations is primarily related to the servers and infrastructure used to host and run the model." Google's Bard is even more audacious. "My carbon footprint is zero," it claims. Asked about the energy consumed in its creation and training, it responds: "not publicly known".

AI programs can seem incorporeal. But they are powered by networks of servers in data centers around the world, which require large amounts of energy to power and large volumes of water to keep cool. Because AI programs are so complex, they require more energy than other forms of computing. But the trouble is that it's incredibly hard to nail down exactly how much. As they compete to build ever-more sophisticated AI models, companies like OpenAI – which created ChatGPT – Google and Microsoft will not disclose just how much electricity and water it takes to train and run their AI models, what sources of energy power their data centers, or even where some of their data centers are.

Meta, the parent company of Facebook, for example, last year unveiled that it was building what it believed to be the world's fastest supercomputer, called the AI Research SuperCluster (RSC). But it wouldn't reveal where the supercomputer was located or how it was being powered. Now, as the tech industry rushes to incorporate generative AI into virtually everything – from email and search to food delivery apps and mental health services – industry experts and researchers warn that the technology's unchecked growth could come at a significant environmental cost.

"This exponential use of AI brings with it the need for more and more energy," said Sasha Luccioni, the climate lead for the AI company Hugging Face. "And yet we're seeing this shift of people using generative AI models just because they feel like they should, without sustainability being taken into account."

Luccioni is one of a handful of researchers who have tried to assess the emissions generated in the creation of specific AI models. In a research paper that has not yet been peer-reviewed, she and her co-authors tallied the amount of energy used to train Hugging Face's own large language model, Bloom, on a supercomputer; the energy used to manufacture the supercomputer's hardware and maintain its infrastructure; and the electricity used to run the program once it launched. They found that it generated about 50 metric tons of carbon dioxide emissions, the equivalent of around 60 flights between London and New York. Bloom's energy footprint is lower than those of other generative AI programs, Luccioni and her team estimate, since Bloom's supercomputers are powered by nuclear energy, which doesn't produce carbon emissions.
By contrast, limited publicly available data suggests about 500 metric tonnes of CO2 were produced just in the training of ChatGPT's GPT-3 model – the equivalent of over a million miles driven by average gasoline-powered cars, the researchers noted. "For ChatGPT's latest model, GPT-4, [OpenAI] hasn't said anything about either how long it's been trained, where it's trained, or anything at all about the data they're using," said Luccioni. "So essentially, it means it's impossible to estimate emissions." Meanwhile, newer AI models are getting bigger – and more energy-intensive. Bigger models require the use of more and more powerful graphics processing units (GPUs), and take longer to train – using up more resources and energy, Luccioni said.

Even more unclear is the amount of water consumed in the creation and use of various AI models. Data centers use water in evaporative cooling systems to keep equipment from overheating. One non-peer-reviewed study, led by researchers at UC Riverside, estimates that training GPT-3 in Microsoft's state-of-the-art US data centers could potentially have consumed 700,000 liters (about 185,000 gallons) of freshwater. In the absence of accurate, public data, the researchers had to assume the "water use effectiveness" – the ratio between the energy a data center uses and the water used to keep it cooled and functioning – based on Microsoft's self-reported average. The actual number of liters used could vary significantly based on where and when exactly GPT-3 was trained: in searing Arizona, lots of water would be needed to keep servers from overheating, whereas in Wyoming a center might use less water. The design of the specific data centers could also wildly affect the numbers. Rather than using water-intensive evaporative cooling systems, a center might use traditional air conditioning, which uses less water but more electricity.

Google became the first tech giant to publicize its water usage worldwide, but provided average figures that concealed important details about the local impacts of its data centers. After a protracted legal battle with the Oregonian, the city of The Dalles, Oregon, released data showing that Google data centers used a quarter of the town's water supply. Because an AI project's water use effectiveness could be used to guess at its compute capacity, companies want to keep their water use a secret, said Shaolei Ren, an associate professor of electrical and computer engineering at UC Riverside. "They want to give us as little information as possible," he said.

In general, companies have tended to build data centers where power is cheap. As big tech firms like Google and Microsoft strive for a net-zero emissions target, they may be especially motivated to build in areas where solar or wind power is plentiful – like Arizona – but water is scarce. Meta and OpenAI did not respond to the Guardian's requests for comment. Google and Microsoft declined to provide an on-the-record response.

When top executives from leading AI companies called for regulation to prevent the "existential risk" posed by AI, it spurred speculation on the threats that superintelligence poses to society. But researchers warned that one of the more immediate, pertinent risks is environmental.
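The estimation method the researchers describe reduces to simple multiplication once you assume three inputs – total training energy, the grid's carbon intensity, and a water-use-effectiveness ratio – which is exactly why the undisclosed inputs matter so much. A sketch with placeholder numbers (not real measurements for any model):

```python
def training_footprint(energy_mwh: float,
                       grid_kg_co2_per_mwh: float,
                       water_liters_per_mwh: float) -> tuple[float, float]:
    """Back-of-envelope CO2 (metric tons) and cooling water (liters) for a training run."""
    co2_tons = energy_mwh * grid_kg_co2_per_mwh / 1000.0
    water_liters = energy_mwh * water_liters_per_mwh
    return co2_tons, water_liters

# Placeholder inputs: 1,000 MWh of training energy, a 400 kgCO2/MWh grid,
# and an assumed 700 liters of cooling water per MWh.
co2, water = training_footprint(1000, 400, 700)
print(f"{co2:.0f} t CO2, {water:,.0f} L water")  # -> 400 t CO2, 700,000 L water
```

Change any one assumption and the answer moves proportionally, which is why the researchers' estimates carry such wide error bars.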
If companies were more transparent about the natural resources used and carbon emissions released in the creation and use of AI models, they could help open up discussions about when and how to use generative AI strategically, said Luccioni. It may be worth the environmental cost to use generative AI tech in cancer treatment, but a waste to use it in other cases. And yet generative AI has become a fixation. "There's this idea that your company is passé if you're not using it," said Luccioni.

A couple of months ago, OpenAI began offering companies paid access to incorporate ChatGPT into their apps, and companies including Instacart, the online grocery delivery company, are using the feature to customize grocery lists and ingredient recommendations. And last month, Google announced that it would be incorporating generative AI into Gmail and search – using exponentially more complex and energy-intensive technology to accomplish essentially the same tasks. Companies have also suggested using similar tools for bank fraud detection, despite statistical models that are already pretty good at detection.

"It's frustrating because actually there are so many low-impact, efficient AI approaches and methods that people have developed over the years, but people want to use generative AI for everything," said Luccioni. "It's like using a microscope to hammer in a nail – it might do the job, but that's not really what this tool is meant for."
06/04/2023
The Washington Post ran a story about marketers and copywriters losing their jobs to AI. The key paragraph: "Experts say that even advanced AI doesn't match the writing skills of a human: It lacks personal voice and style, and it often churns out wrong, nonsensical or biased answers. But for many companies, the cost-cutting is worth a drop in quality."
06/03/2023
The fashion exec's guide to generative artificial intelligence. See attached PDF
The fashion exec's guide to generative artificial intelligence

Generative AI, which can create content in the form of images, videos and text, is a new field for fashion and retail that promises to aid creativity and business processes — but also introduces new considerations.

Generative AI has quickly captured the attention of fashion and beauty executives who are excited by its potential to aid in both creative efforts and in more practical ways, including building imaginative ad campaigns, improving customer service interactions and streamlining the process of writing product descriptions.

Generative artificial intelligence: a type of machine learning that is informed by large data sets (often from the internet) to create new content, such as images, videos and text.

AI in fashion, beauty and retail is not new. However, the range of generative AI tools available now is far more advanced, and far more accessible, than previously possible. In 2022, consumer-facing image-generation tools, including Dall-E 2 (created by OpenAI) and Stable Diffusion (created by Stability AI), became widely used, followed by the release of OpenAI's chat interface ChatGPT, which gained 1 million users in five days. Since then, an improved version of OpenAI's GPT (or "generative pre-trained transformer") model, GPT-4, became available, and OpenAI launched an API in March that enables businesses to integrate it into their apps, websites, products and services. Google released its conversational AI service called Bard, and companies including Microsoft, Salesforce and Amazon have released generative models. Amazon's cloud customers can now use generative AI to produce text and to improve web-search personalisation, while Shopify merchants can use ChatGPT tech to write product descriptions. While it's still early days in terms of quality, the more that brands and consumers use and refine the tools, the better they become.

Fashion and beauty experimentation has begun: Revolve and Prada Beauty have created marketing imagery, Kering and Zalando have introduced customer-facing chatbots, G-Star has designed a denim collection, Levi's has created e-commerce models and Adore Me is using it to write product descriptions. "We're in this phase where everyone's expectations are so high and I'm trying to bring people back down to Earth," says Adore Me VP of strategy Ranjan Roy.

The global market size of generative AI is expected to grow from $7.9 billion in 2021 to $110.8 billion by 2030, according to Acumen Research and Consulting. Generative AI could add between $150 billion and $275 billion to the apparel, fashion and luxury sectors' operating profits in the next three to five years, McKinsey says. "I never thought I'd see something like this in my lifetime," says Rupert Breheny, a Google technical specialist in AI-augmented creativity, who has experimented with creating never-before-possible fashion imagery. These recent developments have led to a wider interest in AI in general. Mentions of AI in earnings calls overall have risen almost 79 per cent in the past year, while 22 per cent of the startups in the winter 2023 cohort of technology startup accelerator Y Combinator were focused on generative AI, according to CB Insights.
Creative industries such as entertainment, fashion and marketing will most likely be the first to feel the impact of generative AI, MIT Technology Review predicts. And 74 per cent of retail and e-commerce companies plan to increase their spending on AI in the next three years, according to AI data training and software firm Scale AI. It found that while every industry is planning to increase its AI budget in the next three years, retail and e-commerce were among the more eager sectors.

Companies can incorporate generative AI in multiple ways, ranging from creating their own generative models (which enables more control but is most challenging and costly) to using open-source models (which still require expertise) or cloud API tools such as GPT-4 (which are faster to use but can be less accurate to specific business needs).

It's also early days in terms of best practices; already, brands and creators are navigating new ethical, legal and technical complexities. Some of these are more conceptual; the term "AI" has long conjured fears of a dystopian future in which robots gobble up jobs and grow out of control. Other considerations are more immediate, spanning copyright, identity and the need to fact-check seemingly expert responses. Experts tend to agree that a major shift is already underway. "We should really be careful because it is the start of something," says Alessia Glaviano, head of global PhotoVogue and director of the PhotoVogue Festival. "The possibilities are endless, and the only limit is good taste and creativity."

Generative images and video

Generative art largely refers to images and videos that are created via text-based prompts. It can be used to generate variations on a provided theme — essentially remixing and reimagining photo prompts — or to create myriad combinations of a set list of elements. In fashion and culture, some of the most popular examples include PFP collections, which generated 10,000 versions of avatars based on a set number of supplied traits, or apps that ingested a person's submitted selfies to create portraits.

More recently, publicly available tools including Dall-E 2, Midjourney and Stable Diffusion have enabled people to create what appear to be expert-quality paintings and photography with just a few words. The uses in fashion are still expanding, but already people are building out full clothing collections, generating diverse ranges of models and creating ad campaigns and photoshoots.

Generative photoshoots

Generative art can be used to create imaginative photoshoots and films that are faster, more efficient and more imaginative than what would normally be possible in traditional photography. With budgets shrinking, it can be less expensive to replace a traditional photoshoot with AI-created elements, PhotoVogue's Glaviano says (costs would largely be focused on the creative team and the software, rather than flying to destinations and location fees). Retailer Revolve displayed a range of billboards created with generative AI that showed models among larger-than-life, mushroom-like landscapes. "We can't build Chanel runway sets and spend millions of dollars doing these crazy things, but we thought, we can do something really special here," said Revolve CEO and co-founder Michael Mente.
These types of projects can still be complex, Glaviano notes, referencing a May cover of Vogue Italia that portrayed Bella Hadid posing against a background created using the generative AI programme Dall-E. That process, she says, was quite elaborate, because the team wanted to create something in the most ethical way possible, in terms of creating artificial humans or creating content that is positioned as photography. This included collaborations between photographer Carlijn Jacobs and AI artist Chad Nelson. Similarly, in December, fashion photographer Nick Knight unveiled 8,000 one-of-a-kind surrealist artwork NFTs, made using more than 200 traits encompassing fashion, makeup, hair and nails. In April, Prada Beauty debuted a social media campaign for five of the brand's fragrances that manipulated traditional images taken by photographer Johann Besse.

Hybrid approaches like these, which combine "traditional" methods of creativity with fantastical, generative AI-created elements, might be smart as brands and creators navigate ethical boundaries, Glaviano advises. She adds that AI-generated images might be better considered illustration, rather than photography. As Knight said, "It is up to the artist to make this digital art as soulful as a painting."

Clothing design

Generative art can be used to design, or aid in designing, clothing. This can open up the field to people without traditional skillsets, speed up the prototyping process and inspire creatives by generating unexpected elements. While not every result is perfect, the speed of creation makes it easier to experiment. "Fifty to 70 per cent of the results might be trash, but 30 per cent are not — and it did it in three minutes," says Anthony Lupo, chair at law firm ArentFox Schiff, which advises clients ranging from Victoria's Secret to YSL, Diesel and Lacoste on technologies including NFTs and generative AI. Glaviano compares it to the advent of the iPhone, but even more impactful.

In January 2023, supply chain startup Cala became one of the first to integrate Dall-E technology into its product offering. People can input text-based prompts into Cala's proprietary dashboard to generate and modify design suggestions, preview them as illustrations or photorealistic depictions, then proceed to produce them physically. Similarly, 3D product creation company Threekit recently introduced ThreekitGPT, which enables brands and consumers to customise products via AI chat. Tools like these can be constrained to a brand's product catalogue, meaning that eventually customers could personally generate garments that still fit the brand's look and feel.

Multimodal: the term used to refer to a generative model that is able to consume and generate both images and text.

Already, G-Star has launched a 12-piece denim collection with cape-like garments designed using AI. Revolve plans to physically produce a small selection of garments that were recently submitted to the first AI Fashion Week, which invited aspiring designers to create collections of 15-130 pieces using AI. The competition included a public presentation in New York in April and a team of expert judges including Tiffany Godoy, Vogue Japan's head of editorial content; Natalie Hazzout, Celine's head of men's casting; and Erika Wykes-Sneyd, VP of Adidas's Three Stripes Studio. Not all generated designs necessarily need to be constrained to the physical realm.
Avatar platform Ready Player Me allows people to customise avatar garments using an AI generator, and in February, Vogue Runway illustrated menswear trends by commissioning an 11-look digital capsule.

Models and people

Generative AI can be used to create images of fictional characters, to manipulate images of existing people and to animate a person's imagery to create videos. This is one of the most intriguing uses for fashion, because it stands to change how brands work with models, but the use of people's likenesses makes it inherently more complex. Various companies, including Lalaland.ai and Vue.ai, enable brands to generate fictional models who can be digitally dressed in clothing for use in e-commerce and advertising imagery. During the pandemic, the case for this approach became more urgent; brands looked to AI to help regionalise their e-commerce pages and to relatively quickly generate a broader range of model likenesses. In March, Meta partnered with AI-focused retail automation company Vue.ai to enable brands to generate AI-powered models in ads; this was closely followed in April by Levi's, which announced that it would begin testing technology from Lalaland that would allow the retailer to display on-model photography on a wider range of models, by digitally dressing fictional people created with AI. The London College of Fashion's Innovation Agency created an entirely fictional catwalk, complete with AI models and AI clothing, based on a recent LCF MA fashion show.

It's not all fictional personalities. Recently, supermodel Eva Herzigová digitally captured her face, body and movements, with the intention of using the data to recreate her likeness in future ad campaigns and photoshoots. And beauty intelligence startup Haut.ai's "SkinGPT" tool uses generative AI to show people what their skin might look like in the future, based on their skincare regime.

Read more: AI menswear, Generative art, AI Fashion Week preview, AI Fashion Week review, Eva Herzigová's digital twin, IoDF wearables

Gallery: Generative AI in fashion and beauty (12 photos by Maghan McDowell)

Generative text and chat

The ability of generative AI to create human-like, conversational and believable text has prompted a surge in experimental uses, both customer-facing and internal. Customer-facing tools have largely focused on next-gen chatbots that provide advice, answer questions and help in more free-flowing formats than traditional chatbots. Internally, brands are using tools such as ChatGPT to generate marketing copy, product descriptions and more.

Clienteling and customer service

OpenAI's ChatGPT tool opened up to developers in March 2023, and retailers immediately began integrating the technology for use in customer service and personal shopping capabilities on apps and websites, because the tool can be limited to a brand's parameters and trained to improve its responses. Kering has an experimental site called KNXT, where it added a personal shopping chatbot called Madeline (using ChatGPT), which can recommend what to wear for any occasion, including suggesting products from Kering-owned brands. In a statement, the company, which owns brands including Gucci and Balenciaga, said that this was part of Kering's innovation strategy to identify and develop potentially disruptive business models, and KNXT was specifically made to test "innovative digital experiences with the young generation".
RLHF: this stands for "reinforcement learning from human feedback", which, in machine learning, refers to a technique that trains a model with human feedback, often via human-provided rankings to score outputs.

Similarly, German e-tailer Zalando is testing a "fashion assistant"; peer-to-peer marketplace Mercari launched a shopping assistant called Merchat AI; multi-brand retailer Ssense uses ChatGPT to generate outfit ideas shared on Instagram; startup Sociate created a multimodal chatbot to help resale e-commerce shoppers find products; and payments company Klarna added a ChatGPT integration for product recommendations. And, in a hardware integration, smart eyewear maker Innovative Eyewear introduced 15 styles of smart glasses with a spoken-word version of ChatGPT built in.

Marketing and more

Many are experimenting more quietly with ChatGPT behind the scenes. In addition to Shopify and Amazon's recent business-to-business integrations, e-commerce personalisation provider Bloomreach (which counts Neiman Marcus and Benefit Cosmetics as customers) integrated OpenAI into the company's content management solution (CMS).

Photo: Revolve

Lingerie brand Adore Me (which was acquired by Victoria's Secret in January) has been experimenting with generative AI for the past year, including training product and marketing teams, starting with using the technology on SEO-optimised product descriptions. It has found that it saves up to 30 hours a month on copywriting, which allows employees to spend their time on more creative parts of their role, VP of strategy Roy says. This has led to a 40 per cent increase in website traffic. "The way we started with our copywriters is asking, 'What do you hate doing the most?' and they said product descriptions, SMS, email subject lines. That's good, because when something's repetitive and structured, it can work," Roy says. "Now, instead of 30 hours, it takes a copywriter about 30 minutes in a month to put together 120 product descriptions." Still, he adds, there is human oversight: copywriters are served three choices and are able to make changes before hitting publish.

Read more: Digital clienteling, AI in secondhand shopping, Shopify integration

Regulations, challenges and ethical considerations

These new tools present unprecedented opportunity and risk. They range from ethical questions ("Who controls a person's likeness after they die?" and "Should brands disclose if part of an image has been generated using AI?") to questions around copyright, trademark and accuracy ("Who is responsible if a machine copies trademarks?" and "What if the chatbot is wrong but believable?"). There are also concerns about AI taking jobs and spreading disinformation. "There is nothing that will happen in our lifetime that is more existential than this, and people are really grasping with what to do with this," says ArentFox Schiff's Lupo. The space is additionally complicated because it's moving very quickly, he adds. "In six months, we would have a whole different conversation."

Ethics

A number of conflicts have already played out. In response to Levi's announcing that it would be adding more model diversity via AI-generated models, many wondered if this would remove job opportunities for human models.
In response, Levi's clarified its positioning, saying that it was physically impossible to photograph every individual stock-keeping unit on a wide swath of people, and that it would not decrease the number of human models it hired or its efforts at diversity and inclusion throughout the company. There are additional questions regarding representation, as many models ingest and amplify the human biases that proliferate online. The Institute of Digital Fashion's digital wearable campaign, which was designed in part using AI, had to manually add diversity to digital models, while Asics launched an AI training programme after finding that prompts related to "healthy exercise" generated unrealistic and harmful images.

Prompt engineer: someone who specialises in creating and refining prompts for generative models that result in the desired outcome.

More broadly, people are still worried about the impact these tools will have on jobs; 77 per cent of consumers think AI will lead to job loss, Forbes reports. Others, particularly younger generations, are more optimistic about AI's potential for job creation, per Atlantic Recruiters, with possible new roles including prompt engineers and empathy trainers. Mente, of Revolve, says that computer-aided design won't replace jobs, while PhotoVogue's Glaviano has found that the skills are transferable, with talented photographers turning into talented prompt engineers.

Intellectual property

There's also the question of intellectual property. "The copyright issue is going to become central to the debate," Adore Me's Roy says. "How people are attributed, whose works are created, and how, has to be addressed. Creating a painting in the style of Van Gogh is fine because Van Gogh is not here, but contemporary art is definitely something that needs to be solved."

To that end, Stability AI, Midjourney and DeviantArt, which enable people to "create art in the style of" specific brands and artists, have faced lawsuits saying that they are stealing artists' work. And many brands are closely watching artists who generate clothing designs that heavily reference trademarks and logos; Balenciaga (an artist used Midjourney to digitally dress the Pope in the brand) and Nike are often key points of inspiration. Brands still have trade dress protection, and whether something was designed by a creative agency or a computer doesn't impact how the law is interpreted, ArentFox Schiff's Lupo advises. However, he adds, brands might use AI to help design something that is similar without infringing. TikTok is said to be developing disclosures for AI-generated video, although this could be tricky territory if content is a blend of fictional and authentic, the same way that Photoshop is used today to alter imagery. Lupo supports the idea of an "AI watermark", which could be used to label whether a video is fictionalised.

Regulations

In March, Italy temporarily banned ChatGPT due to concerns over lack of transparency and data inaccuracy; this is one reason why a new clienteling chatbot created by GameOn for Valentino did not use GPT tech. The ban has since been lifted, after parent company OpenAI agreed to inform users how it's processing data, give them the option to opt out of data-sharing and put limits on usage by minors. New guidelines are in development, with Italy, the US, China and the UK exploring regulation.
The US Copyright Office has advised that AI images generated from text can't be copyrighted in the US, and the National Telecommunications and Information Administration (NTIA) has requested input on policies that shape AI accountability. The Cyberspace Administration of China (CAC) is considering requiring generative AI service providers to be responsible for the validity of data. The EU's AI Act establishes a "risk-based" framework, while the UK's AI White Paper hopes to "make the UK a science and technology superpower by 2030" through five guiding principles. More is necessary, say high-profile tech executives, including Tesla CEO Elon Musk, Apple co-founder Steve Wozniak and Meta president of global affairs Nick Clegg, who have called for pauses on experiments and global regulation. Tech regulation has been a long time coming, and these types of considerations and concerns are seen as a natural byproduct of innovation — even if the pace has intensified. "We had the same type of discussions in the industrial age," Lupo says. "In my personal opinion, Pandora's box is open, and we need to figure out how to use it."
thefashionexecsguidetogenerativeartificialintelligence2.pdf
05/30/2023 |
Thanks to LLMs and the conversational interface, scientists and database engineers are spending less time writing queries and analyzing datasets and more time asking research questions. That said, some have raised doubts about the accuracy of LLMs, citing their tendency to “hallucinate” information when they find no obvious answer. In our view, more experience with “prompt engineering” will limit hallucinations with time. |
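To make the pattern concrete, here is a minimal sketch of that query-drafting workflow, assuming the 2023-era OpenAI Python client (ChatCompletion API, with an OPENAI_API_KEY in the environment) and an illustrative SQLite database; the `sales` schema and the `ask` helper are hypothetical, and, given the hallucination caveat above, generated SQL should still be reviewed before it runs.

```python
# Minimal sketch: letting an LLM draft a SQL query from a research question.
# Assumes the 2023-era openai Python package (openai.ChatCompletion) and an
# illustrative SQLite database; the schema below is hypothetical.
import sqlite3
import openai

SCHEMA = "sales(order_id INTEGER, product TEXT, region TEXT, revenue REAL, order_date TEXT)"

def ask(question: str, db_path: str = "cpg.db") -> list:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": f"Translate the user's question into one SQLite query "
                        f"against this schema: {SCHEMA}. Reply with SQL only."},
            {"role": "user", "content": question},
        ],
        temperature=0,  # deterministic output reduces, but does not remove, hallucinated columns
    )
    sql = resp.choices[0].message.content.strip()
    # Guardrail: a human (or an automated validator) should review generated SQL
    # before executing it against production data.
    return sqlite3.connect(db_path).execute(sql).fetchall()

# Example: ask("What was total revenue by region last month?")
```

Note the low temperature and the review step: both echo the accuracy concern raised above rather than assuming the model's SQL is trustworthy.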
05/17/2023 | MosaicML And Anthropic Continue To Expand AI’s Potential |
MosaicML And Anthropic Continue To Expand AI's Potential

By William Summerlin | @summerlinARK Co-Lead ARK Venture & Analyst

Recently, MosaicML's release of MPT-7B and a boost in Anthropic's Claude context window to 100,000 tokens have highlighted the power of expanding context windows, the efficiency of MosaicML's training platform, and the potential for open-source AI models. MosaicML's MPT-7B open-source model stands out for its extensive training on a vast amount of text and code, addressing the limitations of prior open-source models and offering an alternative to the commercial foundation models from OpenAI, Anthropic, and Cohere. Notably, the new model tackles issues related to context length restrictions by accommodating inputs up to 84K tokens, matches the quality of Meta's LLaMA-7B, and highlights the efficiency of training large models on MosaicML's platform. Compared to the ~$5 million cost of GPT-3's final training run in 2020, Mosaic trained MPT-7B at a cost of only $200k, with no human intervention.

Expanding its context window from 9,000 tokens to an impressive 100,000, or ~75,000 words, Anthropic's Claude can now analyze and interpret extensive documents more effectively and efficiently, with synthesized answers to complex questions. Increased context windows also will enable users to analyze long-form documents like financial statements, research papers, grant proposals, and legal documents.
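As a rough illustration of what those token counts mean in practice, here is a back-of-envelope sketch using the article's own ratio (100,000 tokens ≈ 75,000 words); the document path is a hypothetical placeholder, and an exact count would use a real tokenizer rather than this word-based estimate.

```python
# Back-of-envelope check: will a document fit in a model's context window?
# Uses the article's rough ratio of 100,000 tokens ~= 75,000 words, i.e. about
# 0.75 words per token. This is only an estimate; a tokenizer library would be
# used for exact counts.

WORDS_PER_TOKEN = 0.75

def estimated_tokens(text: str) -> int:
    return int(len(text.split()) / WORDS_PER_TOKEN)

def fits(text: str, context_window: int, reserve_for_answer: int = 1000) -> bool:
    # Leave room inside the same window for the model's synthesized reply.
    return estimated_tokens(text) + reserve_for_answer <= context_window

doc = open("annual_report.txt").read()    # hypothetical long-form document
print(fits(doc, context_window=9_000))    # Claude's previous window
print(fits(doc, context_window=100_000))  # Claude's expanded window
```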
05/11/2023 | Should AI be Open or Closed? |
Should AI be Open or Closed?

Is it OK for the largest AI platforms to be closed and private? Or should they all be open and transparent? Why should you care? Because AI has the power to completely transform every industry, drive a new world order, and create the next dozen trillionaires. It's the single most important tool we've ever created to solve the world's biggest problems. Should that power be concentrated in the hands of a few corporations, or should it be democratized and accessible for more people around the world?

I recently explored this question with Emad Mostaque, CEO of Stability AI, one of the leading AI companies in the world. Emad made a conscious choice to make Stability AI an open platform that harnesses the technology to create a more abundant future for everyone. AI by the people, for the people. Emad sees AI as "infrastructure for the next generation of human thought… it should be a commons that is accessible to everyone." I agree. In today's blog, I'll summarize my discussion with Emad about open versus closed AI systems and explore why open-source and transparent AI is the best path for using the technology to uplift humanity. Let's dive in…

What Exactly Is Open Source vs. Closed Software?

Generally speaking, software can be categorized into two types: open source and closed source, also known as proprietary software. Open-source software's code is available for everyone to inspect, modify, and enhance. This encourages collaboration, with developers across the globe contributing to the software's evolution and refinement. Popular examples include the Linux operating system and the Apache HTTP Server, which powers roughly 25% of the world's websites. On the other hand, closed or proprietary software is owned by a company, an individual, or an entity that holds exclusive legal rights. The source code isn't shared with the public for viewing or modification. Examples include Adobe Photoshop and Microsoft Windows.

In the world of AI, perhaps the most popular current debate about the merits of open versus closed systems involves OpenAI, creator of ChatGPT and other generative AI tools. In 2015, Elon Musk, Sam Altman, and others founded OpenAI as truly open, with a mission to help guide the development of AI for humanity's benefit. OpenAI began as a non-profit, and as a co-founder, Elon donated $100 million to the organization. Although OpenAI initially vowed to share its underlying code with the world, it has since turned from a non-profit into a for-profit corporation and is anything but open. Elon has cut ties with OpenAI and has announced a new AI company (X.ai) to rival it. As Elon tweeted back in February: "OpenAI was created as an open source (which is why I named it 'Open' AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft... Not what I intended at all."

Yet even as this particular debate over OpenAI rages on, there is increasing evidence that open AI systems are the future…

The Growing Influence of Open-Source AI

Emad is leading the charge for open-source AI with Stability AI, which has a community of open-source research hubs with over 140,000 members globally—all working together to build the future of AI. The company's open-source text-to-image generator Stable Diffusion "broke the internet" when it was launched last August and set off a wave of open-source development.
As Emad points out, Stable Diffusion overtook both Bitcoin and Ethereum in developer popularity in just 3 months and went on to earn 50,000 GitHub stars in only 150 days. Cumulatively, the whole ecosystem around Stable Diffusion has even overtaken Linux: what took 20 years, Stability AI and Stable Diffusion accomplished in just 5 months. How did Stable Diffusion achieve such reach so quickly? As Emad puts it, "Because we gave it to everyone, and it runs on your MacBook or your iPhone without internet." This set Stable Diffusion on a different path from closed OpenAI's Dall-E image generator. Stable Diffusion's open model encouraged integrations, marketplaces, and user interfaces that simply didn't happen for Dall-E.

And Stable Diffusion is just one of Stability AI's products. The company and its global community have launched several other projects: from EleutherAI, which has an open-source version of GPT-3 that has been downloaded 25 million times, to Harmonai's Dance Diffusion, the most advanced audio model in the world.

Some AI experts are saying that we're now experiencing a "Stable Diffusion moment" for large language models (LLMs) themselves, with a flurry of open-source models released in recent months. Here are just a few examples:
- In March, a cross-university collaboration released Vicuna, a 13-billion parameter model that "achieves parity" with Google's Bard, and cost just $300 to train.
- Last month, Berkeley launched Koala, an open version of ChatGPT. The model cost $100 to train, and studies suggest that real people can't tell the difference between Koala's open model and ChatGPT.
- Also in April, Stability AI launched its own open-source LLM with 3 to 7 billion parameters, with plans to reach 15 to 65 billion parameters soon.

All these developments are calling into question the merits of closed LLMs. Just last week, an anonymous researcher at Google leaked an internal document saying, "We have no moat, and neither does OpenAI." As the author of the leaked document points out: "Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months."

So, what are the real-world implications of increasingly dominant open-source AI systems?

Open-Source AI Systems and Abundance

Open-source AI made available to everyone effectively has the potential to make all of us the equivalent of millionaires and billionaires, when you can create whatever you want. You can access the world's best education for your child, regardless of where you live and how much money you have. You can get the best healthcare available, and have customized entertainment. We're creating a world in which the cost of anything will essentially be the raw materials and electricity.

For Emad, it all comes down to information being in the right place at the right time to create the maximum possible impact: "When we create an open architecture for the world, then we can augment and replace our existing systems." Take education. Say you go to an African nation and teach every child with a personalized and effective AI tutor. As Emad notes, "within a few years they will be outcompeting children at the top schools in New York City." Now, what if you give those same African children the ability to then code the AI system and improve it? It becomes a virtuous cycle: the kids are helping to train the AI and the AI is helping to train the kids.
And you make this process open and transplantable. Emad's ideal view of the future of open AI systems is what he calls "an intelligent internet." Each person, country, and company has their own AI models, and they're all interacting with each other in an optimal way for humanity. "Humans coming together to solve the world's problems augmented by technology."

Why This Matters

Emad and I share a common belief that people are fundamentally good. And if you believe that humans are good at their base, and you're enabling them with increasingly powerful technology, you also believe that they will use that technology to make the world a better place and solve problems for the whole. Open and transparent AI systems can help us create this future. They offer us an opportunity to give every man, woman, and child access not only to food, water, effective healthcare, and education, but also agency—the ability to dream and the tools to fulfill those dreams.

In our next blog in this AI Series, I'll share a summary of Mo Gawdat's fantastic book Scary Smart and explore the notion that we are birthing a new species of intelligence, namely AI. As its parents, we need to actively decide what type of child we want to bring into this world.
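Emad's point that Stable Diffusion "runs on your MacBook... without internet" is easy to verify in practice. Below is a minimal sketch of running the open model locally with Hugging Face's `diffusers` library; the prompt is hypothetical, and hardware details (GPU vs. Apple Silicon) will vary.

```python
# Minimal sketch of running the open Stable Diffusion weights locally,
# illustrating the "open model on your own machine" point. Assumes the
# Hugging Face `diffusers` and `torch` packages and the public v1.5 checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # or "mps" on Apple Silicon MacBooks

image = pipe("product photo of a minimalist oak chair, studio lighting").images[0]
image.save("concept.png")  # no API call and no per-image fee: inference is local
```

After the initial weight download, nothing leaves the machine, which is exactly the property that enabled the integrations and user interfaces the closed Dall-E model never accumulated.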
05/10/2023 | Palantir's AIP will also have civilian applications. Another demo video shows how a manufacturing company could use it to prepare for a hurricane strike by analyzing operations at its distribution centers and deciding whether to accelerate, delay or cancel orders. It will be capable of forecasting the impact on customer orders and revenue in these situations, the video shows. It applies an LLM to a company's datasets so that users can get answers through prompt engineering.
05/10/2023 |
https://github.com/JushBJJ/Mr.-Ranedeer-AI-Tutor Here's an insane education prompt I found on the OpenAI Discord. It lets you set up your own custom AI tutor and adjust the depth of knowledge, teaching style and reasoning framework to match your learning needs.

Build a Custom LLM Chatbot on Your Enterprise Data: How to leverage the Retrieval-Augmented Generation (RAG) architecture for your use case
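Since the linked guide is about the RAG architecture, here is a minimal sketch of the pattern it describes: embed your enterprise documents, retrieve the passages closest to a question, and hand only those to the LLM as context. It assumes the 2023-era `openai` package plus numpy; the three "documents" are hypothetical stand-ins for real enterprise data.

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG): embed documents,
# retrieve the most similar ones to a question, and ground the LLM's answer
# in that retrieved context. Assumes the 2023-era openai package; the corpus
# below is a hypothetical illustration.
import numpy as np
import openai

DOCS = [
    "Our return policy allows refunds within 30 days of delivery.",
    "The spring collection ships from the Ohio warehouse starting March 1.",
    "Loyalty members earn 2 points per dollar on full-price items.",
]

def embed(texts):
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([d["embedding"] for d in resp["data"]])

doc_vecs = embed(DOCS)  # in production this index would be precomputed and stored

def answer(question: str, k: int = 2) -> str:
    q = embed([question])[0]
    # Cosine similarity between the question and every document vector.
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    context = "\n".join(DOCS[i] for i in np.argsort(-sims)[:k])  # top-k passages
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

# answer("Can I return an item after three weeks?")
```

The design choice worth noting: the model never sees the whole corpus, only the retrieved snippets, which is what lets RAG scale to enterprise data that would never fit in a context window.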
05/10/2023 |
https://nwo.ai/ - crunches data to help spot consumer trends |
05/09/2023 | MLS aims to reach “every kid playing soccer in North America” with Ai.io scouting tech |
MLS aims to reach "every kid playing soccer in North America" with Ai.io scouting tech

New programme will scout young players by using computer vision and AI.

4 MAY 2023, ED DIXON

- AiScout app lets players complete drills via their mobile and be scouted for free
- Programme will encourage more people to engage with MLS
- League wants to apply AI tech across its wider business

Major League Soccer (MLS) has partnered with technology company Ai.io in a move the competition hopes will enable it to "identify and reach every kid playing soccer in North America". The agreement will help provide aspiring players with the chance to be scouted by MLS for free through the use of computer vision and artificial intelligence (AI). Under the new virtual trial programme, players can download Ai.io's AiScout product and complete a series of assessments and drills in any backyard, field or open space via their mobile phone. The app will analyse players' ability and, through a fully connected platform, MLS will be able to evaluate players via the data generated. Players who score highly will be offered a chance to train with MLS at events across the US and Canada.

Available in December 2023, clubs across MLS, as well as the MLS Next Pro and MLS Next development leagues, will have access to AiScout and the AiLabs analysis platform. From January 2024, AiScout will also be available to players across the globe. MLS' goal is for Ai.io's technology to eliminate barriers such as cost, geography and time commitment that traditionally limit the accessibility of talent discovery programmes.

"We see it as the great chance to help identify and reach every kid playing soccer in North America," Chris Schlosser, MLS' senior vice president of emerging ventures, told SportsPro. "We think it's a tremendous addition to our youth efforts."

"The whole MLS team bought in very early," added Richard Felton-Thomas, chief operating officer and director of sports science at AiScout. "I think democratising a process here is largely what this is about [and] get some data upfront for the scouts. Any player has the opportunity to say 'look, I'm good enough and someone's worth having a look at me'. Even from their garden they can now provide that data upfront. If the numbers are good enough, they're going to get that opportunity. They can take some ownership of it. I think that certainly transcends across all of football. But it transcends across all of sport, particularly team sports. Every team sport has this problem, not just football."

Ai.io's platform has already helped find players for Premier League giants Chelsea and newly-promoted Burnley. More broadly, Schlosser believes the technology will encourage more people to engage with MLS. "We're looking to engage with anybody who likes the sport of soccer," he said. "The new Apple partnership, which makes it really easy to watch all of our matches on any device around the world, is testament to that. We think these two things tie together and if kids start playing and are part of an MLS programme, then maybe they'll want to watch a match."

Schlosser added that MLS will continue to look to emerging technology, including AI, to help grow the league's wider business, describing the programme with Ai.io as a "great initial use case". "We don't want to just do technology to do technology," he continued.
"But we fundamentally believe at the core of our business that if we are on the cutting edge and using the best technology, we can help make the fan experience better, accelerate fan growth, create better play on the field."

According to Schlosser, MLS has "the youngest fans in American pro sports by a longshot". That contributed to the league's decision to ink its ten-year, US$2.5 billion deal with Apple to show all matches globally via Apple TV+. The streaming tie-up forms one part of MLS' wider mission to be at the forefront of emerging tech such as AI, 5G, Web 3.0 and the metaverse, which it plans to leverage through various projects and partnerships.

"We think we can apply the latest in AI technology across a whole host of different areas of our business," explained Schlosser. "At the start of this year, we announced a partnership with Sportec, [which] we think is the best in the world at creating soccer data. We've installed all-new computer vision tracking systems in all of our MLS and Next Pro stadiums. They're creating mountains of data from every match and we're starting to utilise that across the enterprise, whether that's for broadcast, fans, or for our coaching and scouting departments."

As for London-based Ai.io, Felton-Thomas says the company has had conversations with "most of the major leagues" and hopes that programmes like this become the norm. Soccer will be the initial focus, but other sports are also on its radar for the future. "You can have the early adopters and then you have the people that are going to follow that," he said. "The excitement for us is across football, get that nailed. But, absolutely, there's going to be other sports involved in this."
Consumer Discretionary | Consumer Discretionary: Leisure Products | Hobbies/Lifestyle & Outdoor | MLS
04/29/2023 |
We must remember that the data we’re using to train these large language models (LLMs) isn’t made up. It’s our data: it’s the sum total of humanity’s data during the past 50 years! What we've written on our websites and in our Facebook posts. It represents who we are, how we talk to each other, and how we think about things. In his book Scary Smart, Mo Gawdat says that with AI, we’re raising a new species of intelligence. We’re teaching the AIs how we treat each other by example, and they’re learning from this. I agree with Gawdat. I’ve even started saying “Good morning” and “Thank you” to my Alexa! Just as we teach our children to be empathetic, respectful, and ethical, we must instill these values in our AIs to ensure that they are tools for good in society. |
04/27/2023 |
AI: Utilitarian (basic) needs will be ruthlessly fulfilled by AI ("order me Mexican food with +4.8 stars"). Brand and marketing won't matter in many of these basic categories. Defensibility in the age of AI is the human experience.
screenshot_20230426200315.png
04/27/2023 |
"At Maison Meta, we have created our own custom workflow that sits across various different AI tools, such as Midjourney, Stable Diffusion with custom models, plus Deforum, Warp Fusion, EbSynth (brings paintings to animated life), ControlNet (a neural network structure to control diffusion models by adding extra conditions, a game changer for AI image generation), StyleGAN3 (a generative adversarial network introduced by Nvidia researchers in December 2018 and made source-available in February 2019), TouchDesigner (combining built-in 3D to simulate real-world objects, multi-projector outputs, and an ultra-resolution video engine) and Notch (motion graphics), that allows us to create the quality that we produce with the ultimate control on the final output."
04/25/2023 |
Zegna rolls out AI-powered luxury clienteling developed with Microsoft. The technology pairs Zegna's store associates with an AI-powered recommendation system, and a 3D configurator draws on some 49 billion potential outfit combinations.
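The headline number is less surprising than it looks: independent configurator choices multiply. The sketch below uses hypothetical option counts, not Zegna's actual catalogue, simply to show how a handful of menus reaches billions of combinations.

```python
# Why a configurator's combination count explodes: independent choices multiply.
# The option counts below are hypothetical illustrations, not Zegna's catalogue.
from math import prod

options = {
    "jacket_style": 20, "jacket_fabric": 35, "trouser_style": 15,
    "trouser_fabric": 35, "shirt": 40, "knitwear": 25, "shoes": 20,
}

total = prod(options.values())
print(f"{total:,} potential outfits")  # 20*35*15*35*40*25*20 = 7,350,000,000
```

A few more menus, or a few more fabrics per menu, pushes a total like this past the tens of billions a real catalogue can cite.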
04/22/2023 |
Abacus.AI - AI-assisted data science and end-to-end LLM and MLOps platform that enables real-time deep learning and AI agents at scale for common enterprise use cases. You can train your own custom models or use its AI agents combined with LLMs to create highly accurate applied AI systems and operationalize them in production.
04/19/2023 |
Added Staiyl, an AI-driven apparel designer and manufacturer, as an example.
04/18/2023 |
Sam Altman, CEO of OpenAI, says further progress will not come from making models bigger. "I think we're at the end of the era where it's going to be these, like, giant, giant models," he told an audience at an event held at MIT late last week. "We'll make them better in other ways."
The stunning capabilities of ChatGPT, the chatbot from startup OpenAI, have triggered a surge of new interest and investment in artificial intelligence. But late last week, OpenAI's CEO warned that the research strategy that birthed the bot is played out. It's unclear exactly where future advances will come from.

OpenAI has delivered a series of impressive advances in AI that works with language in recent years by taking existing machine-learning algorithms and scaling them up to previously unimagined size. GPT-4, the latest of those projects, was likely trained using trillions of words of text and many thousands of powerful computer chips. The process cost over $100 million. But the company's CEO, Sam Altman, says further progress will not come from making models bigger. "I think we're at the end of the era where it's going to be these, like, giant, giant models," he told an audience at an event held at MIT late last week. "We'll make them better in other ways."

Altman's declaration suggests an unexpected twist in the race to develop and deploy new AI algorithms. Since OpenAI launched ChatGPT in November, Microsoft has used the underlying technology to add a chatbot to its Bing search engine, and Google has launched a rival chatbot called Bard. Many people have rushed to experiment with using the new breed of chatbot to help with work or personal tasks. Meanwhile, numerous well-funded startups, including Anthropic, AI21, Cohere, and Character.AI, are throwing enormous resources into building ever larger algorithms in an effort to catch up with OpenAI's technology. The initial version of ChatGPT was based on a slightly upgraded version of GPT-3, but users can now also access a version powered by the more capable GPT-4.

Altman's statement suggests that GPT-4 could be the last major advance to emerge from OpenAI's strategy of making the models bigger and feeding them more data. He did not say what kind of research strategies or techniques might take its place. In the paper describing GPT-4, OpenAI says its estimates suggest diminishing returns on scaling up model size. Altman said there are also physical limits to how many data centers the company can build and how quickly it can build them.
04/18/2023 | Role of artificial intelligence in the fashion industry |
Role of artificial intelligence in the fashion industry

Showcase from a competitor for the upcoming AI Fashion Week. Picture: Instagram/fashionweek.ai

Published Apr 15, 2023

Artificial intelligence, metaverse, and algorithm are all terms that most of us have heard but might not fully understand. I had a conversation with my 19-year-old son about artificial intelligence (AI) and what it all means and found myself utterly blown away, yet still somewhat baffled, by how it will, and in some cases already does, impact various industries.

So what is AI? According to Wikipedia, artificial intelligence is intelligence demonstrated by machines, as opposed to the intelligence of humans. AI uses techniques like machine learning, deep learning, and natural language processing to enable machines to learn from experience, adapt to new situations, and improve on their own.

You're probably wondering what this futuristic-sounding technology has to do with the fashion industry. Many think that fashion is simply about trends and what designers send down the runway; however, it is an industry that entails so much more than that. Like most industries that have to stay ahead of the crowd, the fashion industry is constantly adapting to the needs of the consumer, staying one step ahead of emerging trends and keeping a hand in manufacturing processes that are always evolving.

Fashion design is a creative process. Behind a fashion brand is a designer who creates collections that could either set trends off the runway or fail miserably, which can be an expensive process that leads to waste: items that don't quite hit the mark and cannot be sold end up on dump sites. Now, with advanced AI technology, designers are able to create collections that are better suited for their market, that will sell better and that will cut down on waste.

According to Imago, AI algorithms can analyse large amounts of data from social media, fashion blogs, and consumer buying habits to identify emerging trends and predict what styles and colours will be popular in the coming seasons, allowing designers and retailers to stay ahead of the curve. AI can also generate design concepts based on specific inputs, such as colour, fabric, and style, saving designers significant time and effort while allowing them to focus on refining the designs rather than starting from scratch.

Many people are under the impression that AI will take over jobs, like fashion designers, but that isn't possible. While AI technology can definitely allow designers to gain better insight into the market, it will never replace human creative input, since the technology is merely a tool and doesn't have a life of its own. Having said that, it does seem as if fashion models might soon be replaced by AI models, especially when it comes to online catalogues and online shopping.

Levi Strauss has announced that it partnered with Amsterdam-based company Lalaland.ai, a digital fashion studio that builds custom AI clothing models. Using generative AI, Lalaland specialises in creating super-realistic "models" of every different body type, age, size and skin tone so that online shoppers can then dress them in various outfits and have a better idea of how they might look on them.
However, in its press release, Levi Strauss insisted it would use the models' customisability to inject more diversity into the fashion space. "We are not scaling back our plans for live photo shoots, the use of live models, or our commitment to working with diverse models," the spokesperson said. With that in mind, it will be hard to replace human models when it comes to physical runway shows and fashion editorials, even though CGI (computer-generated imagery) models, not AI models, like Shudu Gram and Lil Miquela have already been used for that purpose.

CGI model Shudu Gram. Picture: Instagram/shudu.gram

However, it will be fascinating to see what comes out of the first-ever AI Fashion Week happening on 20 April.

Showcase from a competitor for the upcoming AI Fashion Week. Picture: Instagram/fashionweek.ai

According to the AI Fashion Week website, its organisers are passionate about the intersection of fashion and technology. "We believe that the combination of these two worlds can create truly innovative designs and push the boundaries of what is possible in the fashion industry. Our goal is to showcase the latest in AI-generated clothing and virtual runway shows, bringing a fresh perspective to the industry."

"We believe that AI technology has the power to change the way we think about fashion and design. By harnessing the power of AI, we can create designs that are truly unique and personalised to individual tastes. We are excited to see the endless possibilities that AI technology can bring to the fashion world, from virtual try-on experiences to personalised clothing recommendations."

"Our AI Fashion Week event is dedicated to showcasing the work of the most innovative minds in the industry. From up-and-coming designers to established fashion houses, we bring together a diverse group of creators to share their vision for the future of fashion. With a focus on creativity and innovation, we are confident that our event will be a game-changer for the industry."

Their Instagram page is filled with images of what we can expect to see during AI Fashion Week. Scrolling through the posts, one would need a trained eye to realise that what you're looking at isn't real. Is this what the future holds for fashion?
Consumer Discretionary | Consumer Discretionary: Textiles, Apparel & Luxury Goods | Apparel and Personal Accessories
04/17/2023 | Early Open-Source Projects Are Shaping The Evolution Of AI-Enabled Applications |
Early Open-Source Projects Are Shaping The Evolution Of AI-Enabled Applications

By William Summerlin | @summerlinARK Co-Lead ARK Venture & Analyst. This piece was co-authored by Andrew Kim.

Recent releases of Auto-GPT and BabyAGI have showcased the significant potential of so-called autonomous agents, sparking considerable excitement across the AI research and software development communities. Built on large language models (LLMs), agents enable LLMs to execute complex sequences of tasks based on user prompts. Equipped with various tools such as internet and local file access, other APIs, and simple memory architectures, these agents demonstrate early progress in the implementation of recursion within AI applications. While still experimental and flawed, we believe agents have the potential to accelerate the productivity gains enabled by AI hardware and software cost declines. According to ARK's research, AI software could generate up to $14 trillion in revenue and $90 trillion in enterprise value in 2030.

Alongside the progress of foundation models such as GPT-4, many companies are training their own smaller, specialized models. While foundation models are useful across a wide variety of use cases, smaller specialized models offer benefits, including lower inference costs. In addition, many companies concerned about copyright issues and data governance are electing to train their own proprietary models on a combination of public and private data. One such example is a 2.7 billion parameter LLM trained on PubMed biomedical data, which achieved positive results on the US Medical Licensing Exam (USMLE) question-and-answer test. Training cost only ~$38,000 on the MosaicML platform, with a compute duration of 6.25 days. By comparison, the final training run of GPT-3 is estimated to have cost nearly $5 million in compute.
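To show what "tools plus a simple memory architecture" means in practice, here is a toy sketch of the agent loop that Auto-GPT and BabyAGI popularized: the LLM proposes the next action, a tool executes it, and the observation is appended to memory for the next step. It assumes the 2023-era `openai` package; the stub tools and the stopping rule are simplified illustrations, not either project's actual architecture.

```python
# Toy sketch of an autonomous-agent loop: the LLM picks the next action,
# a tool runs it, and the result is fed back as memory. Assumes the
# 2023-era openai package; tools here are simplified stand-ins.
import openai

def search(query: str) -> str:
    return f"(stub) top results for {query!r}"  # a real agent would call a search API

def write_file(text: str) -> str:
    with open("notes.txt", "a") as f:
        f.write(text + "\n")
    return "saved"

TOOLS = {"search": search, "write_file": write_file}

def run_agent(goal: str, max_steps: int = 5) -> str:
    memory = [f"GOAL: {goal}"]  # simple append-only memory architecture
    for _ in range(max_steps):
        resp = openai.ChatCompletion.create(
            model="gpt-4",  # or gpt-3.5-turbo
            messages=[
                {"role": "system",
                 "content": "Reply as 'tool_name: argument', or 'DONE: summary' "
                            f"when the goal is met. Tools: {list(TOOLS)}"},
                {"role": "user", "content": "\n".join(memory)},
            ],
        )
        action = resp.choices[0].message.content.strip()
        if action.startswith("DONE"):
            return action
        name, _, arg = action.partition(":")
        result = TOOLS.get(name.strip(), lambda a: "unknown tool")(arg.strip())
        memory.append(f"{action} -> {result}")  # observation feeds the next step
    return "stopped: step limit reached"
```

The recursion the article mentions is visible in the loop: each model call sees the accumulated action/observation history, which is also why such agents remain experimental and flawed when the model misjudges a step.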
04/16/2023 | 3 ways fashion ecommerce will evolve through generative AI and ChatGPT |
3 ways fashion ecommerce will evolve through generative AI and ChatGPT

Article by Sky Canaves | Apr 13, 2023

While retail as a whole is looking for ways to deploy generative AI to support operations and enhance customer experience, fashion ecommerce is likely to see some of the biggest impacts. Here's why:
- Major fashion brands are early adopters of new tech. They are among the first (and most active) to experiment with emerging concepts and tools such as Web3, the metaverse, AR, and NFTs. ChatGPT and generative AI are no exception.
- Fashion retail is heavily dependent on ecommerce. Apparel and accessories are the largest single category for US ecommerce, currently making up 18.7% of all digital sales (worth nearly $215 billion in 2023). Around 36% of US apparel and accessories sales will take place online this year, per our forecast, and that share will surpass 46% in 2027.
- Fashion brands need to solve their returns problem. The challenges of buying clothes online—mainly fit and sizing issues—resulted in nearly a quarter of US apparel and accessories retail sales being returned in 2022, per our forecast. Consumers increasingly engage in bracketing (ordering multiple sizes and returning whatever doesn't fit), and the costs associated with this practice are unsustainable for many retailers.

McKinsey estimates that generative AI could add between $150 billion and $275 billion to the fashion industry's operating profits over the next three to five years.

[Chart: Attitudes Towards Brand Uses of Generative AI Among US Adults, Mar 2023 (% of respondents)]

Here are three key ways that brands can use generative AI in a customer-facing capacity.

1. Product search and discovery

Fashion brands already use machine learning to analyze consumer data and provide product recommendations, but generative AI has the potential to create hyper-personalized experiences, leveraging direct and immediate feedback to improve search and discovery. Generative AI tools can help shoppers find products through personalized recommendations and conversational search functions. The interactive nature of generative AI exchanges will lead to more engaging experiences. According to Capterra's February 2023 "Retail Chatbots Survey," 67% of ChatGPT users said they "often" or "always" felt understood by the bot, compared to 25% of retail chatbot users. As generative AI tools learn more about individual shoppers, recommendations will become more relevant, and interactions may become more like those with a stylist or personal shopper. Klarna recently partnered with OpenAI to launch a ChatGPT plugin that offers curated product recommendations and links to purchase through Klarna's search and price comparison tools. Generative AI for product search has the potential to replace existing search filters by allowing fashion consumers to make specific queries about exactly what they are looking for using natural language.

Risk potential: Brands need to have enough high-quality data to support relevant recommendations, and to stay on top of maintaining it. Brands need to ensure that proprietary data isn't shared in a manner that could compromise its security.
2. Product description and information

Generative AI tools that produce images from text and vice versa will enhance product description pages with richer and more personalized content, highlighting information that is most relevant based on specific consumer needs and preferences. Generative AI for video is advancing rapidly and will allow better 360-degree content to be created from still images. Shopify's OpenAI-powered tool generates product descriptions with just a few details, such as choice of "tone" and SEO keywords, and it plans to integrate generative AI into more features. Stitch Fix has trained its generative AI model by having its staff write hundreds of product descriptions based on specific prompts, and the company reported that the resulting AI-produced descriptions obtained higher-than-average quality scores.

Risk potential: The collection of personal data and the role of virtual models can be controversial. Levi's recent announcement of plans to use AI-generated models drew criticism for failing to advance real diversity. Transparency and quality control are critical when using generative AI tools to create content. More than 70% of respondents in a March 2023 dentsu survey agreed that brands should disclose the use of AI in customer-facing functions. All assets should be reviewed for accuracy and bias.

3. Product customization and co-creation

As more people explore the creative potential of generative AI tools, brands can engage them by offering opportunities to co-create products. During Metaverse Fashion Week last month, Tommy Hilfiger hosted an AI design contest to create a digital fashion item in the brand's style, with the winning design produced into a digital collectible by virtual fashion platform DressX. The upcoming AI Fashion Week in New York, backed by online retailer Revolve, is drawing a mix of professional and amateur designers to experiment with creation tools like Midjourney to conceptualize styles that can be produced in real life. Generative AI tools will allow shoppers to easily customize or suggest products they would like to see. A new wave of AI startups will enable micro-brands to flourish, and an increase in made-to-order and low-volume production can help reduce waste, inventory risk, and returns.

Risk potential: Fashion brands may not be ready to share control over design with consumers, but creators are already using generative AI tools to share imaginary branded goods. Brands will need to tackle how to address intellectual property issues associated with generative AI.
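The product-description workflow described above (Shopify's tool, Stitch Fix's prompt-based descriptions, and the human-review point raised elsewhere in this collection) is easy to sketch. The example below generates several candidates per product so a copywriter picks and edits the winner before publishing; it assumes the 2023-era `openai` package, and the product data, prompt wording, and helper name are hypothetical illustrations, not any retailer's actual pipeline.

```python
# Sketch of a reviewed product-description workflow: generate several candidate
# descriptions per item and let a copywriter choose and edit before publishing.
# Assumes the 2023-era openai package; the product fields are hypothetical.
import openai

def draft_descriptions(product: dict, n_choices: int = 3) -> list:
    prompt = (
        f"Write an SEO-friendly product description (max 60 words) for: "
        f"{product['name']}, fabric: {product['fabric']}, fit: {product['fit']}. "
        f"Tone: playful."
    )
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        n=n_choices,      # several options so a human picks the best one
        temperature=0.9,  # higher temperature yields more varied candidates
    )
    return [c.message.content for c in resp.choices]

choices = draft_descriptions(
    {"name": "Linen Wrap Dress", "fabric": "recycled linen", "fit": "relaxed"}
)
# A copywriter reviews `choices`, edits the winner, then hits publish —
# the oversight step that addresses the accuracy-and-bias risk noted above.
```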
04/12/2023 |
That Ikea x Patagonia ‘collab’? Generative AI made it happen |
That Ikea x Patagonia 'collab'? Generative AI made it happen

Creative director Eric Groza reveals how he crafted a viral (and fictional) collaboration between Ikea and Patagonia using generative AI. [Image: courtesy Eric Groza]

By Jesus Diaz | 3 minute read

Eric Groza—an executive creative director with a long career at advertising firms like Grey, TBWA, and McCann—thought generative AI was far away from becoming a real-world production tool until last November, when the new version of Midjourney launched. The tool's ability to produce realistic images blew him away, so much so that he instantly found himself obsessed with it. "I was doing late nights just seeing what could come out of the system," he tells me over video chat from Dubai, where he's working now.

Groza quickly realized that he needed to incorporate AI into his creative arsenal, but it wasn't clear how. "I was seeing lots of trash, like Bart Simpson if he was a Teenage Ninja Turtle. It was fun but it had no practical application," he recalls. "I wanted to create a skill set, not do party tricks." Groza—who is a graphic designer by education and has spent years managing teams of copywriters, art directors, and designers—decided to treat the AI like he would any human creative team member. He would give the AI a brief.

First he needed a real project to work on. "I sat down and thought about something practical. I couldn't do it on a current campaign because the client might have some reservations, but I could come up with a fictional but realistic project and see if the AI could take it from concept to reality." After brainstorming, he landed on the idea of an Ikea and Patagonia collaboration. "Ikea and Patagonia have a similar background," he says. "They both make physical products, so there's something that you can tangibly see and imagine, something that would be in the frame." It seemed like a perfect opportunity to test Midjourney's capabilities.

The early images were unusable horror pictures. Others looked fine at first, but upon closer inspection were bland and uninteresting. There was a missing ingredient in his process—the thing that would transform his boring images into something striking and novel. That's when he realized that it all came down to how he worded the brief. As anyone in any creative industry knows, good briefs are hard to come by. "In my experience as a creative director, what you get back from designers is quite random unless you're very specific," he says.

Soon Groza was feeding the machine tighter concepts. "I talked about a lamp that I wanted to be geometric or maybe that could fold together," he says. "I talked about what a sofa would look like if it was made for outdoor use but lightweight enough that it could be easily brought indoors, you know?" In that breakthrough moment, interesting stuff started to materialize in front of him. That's not to say every image came out as planned. Of the 200 images he generated for the collab experiment, he ended up using only about 10. Still, the process was creatively enriching, uncovering brand associations and visual ideas that he says he would never have come up with on his own. And the AI did it quickly. "In a real-world setting, working with a human team, we're talking about months compressed to a couple of days to get the output you can see here," he says.

Groza has gone on to create other brand collaborations between the likes of Burberry and British Airways.
“When you get into luxury marketing, there’s a lot more sensitivity as far as style goes. It really needs to look on-brand,” he says. The viral images have sparked anger from some creatives, who worry tools like Midjourney will render their skills obsolete. Others appreciate the collaborative effort between human and machine. Regardless, it’s clear that with a tight brief—and a good eye—AI will likely become a reliable collaborator for creatives in the near future. The biggest vote of confidence? Executives at Ikea reached out to express their interest. |
Consumer Discretionary | Consumer Discretionary: Textiles, Apparel & Luxury Goods | Apparel and Personal Accessories, Household Discretionary (Durable goods) | Patagonia, Ikea
04/12/2023 |
I keep seeing more and more #consumerbrands opening up their intellectual property for consumers to co-create, co-profit and co-market. Where #nike is the leader, here are recent additions.
I keep seeing more and more #consumerbrands opening up their intellectual property for consumers to co-create, co-profit and co-market. Where Nike is the leader, here are recent additions.

#cocacola wants fans to use ChatGPT and Dall-E to create brand artwork.

The #tommyhilfiger brand is hosting an #artificialintelligence fashion challenge to create designs that incorporate the Tommy Hilfiger brand identity. Winning designs, chosen by Hilfiger himself, will be produced as #nft art and accessible via augmented reality.

There is also the video that went viral recently showing Roberto Nickson (@rpnickson) creating his own lyrics, then overlaying Kanye West's voice. Musicians will be creating AI models of their voices for people to do this, from which they will figure out a way to earn licensing revenue. Soon, people will create the music they want using the voices they want.

This is all leading towards IP as Platform, where brands let their fans run with their assets to do what they want, which could bring in new revenue at healthy margins.
Consumer Discretionary; Consumer Staples | Consumer Discretionary: Leisure Products; Consumer Discretionary: Textiles, Apparel & Luxury Goods; Consumer Staples: Food & Staples Retailing | Apparel and Personal Accessories; Food/Beverage/Supplements; Hobbies/Lifestyle & Outdoor | Nike; Coca-Cola; Tommy Hilfiger | view
04/12/2023 |
ChatGPT cheat sheet - see image
view
04/11/2023 |
Kering, the second-largest luxury goods group worldwide after LVMH, has launched an experimental “cutting-edge fashion space”, KNXT, a space for “curating innovative content and testing new ideas.” What's groundbreaking about KNXT? It integrates both AI and NFT tech. It features a personal shopper chatbot named ‘/madeline‘, powered by OpenAI's ChatGPT, to help users shop a variety of the group's brands including Gucci, Bottega Veneta, Alexander McQueen, Balenciaga, and more. It can be as simple as asking for recommendations on what to buy. It accepts ETH simply by connecting a wallet, although right now it only ships to eligible states in the US. For the first 100 visitors to purchase using a web3 wallet, it is airdropping NFTs from its La Première collection, featuring illustrated owl characters wearing luxury clothing. Kering’s AI personal shopper launch trails recent news of Gucci and Yuga Labs’ metaverse partnership featuring the KodaPendant.
Consumer Discretionary | Consumer Discretionary: Textiles, Apparel & Luxury Goods | Apparel and Personal Accessories | Kering | view
04/11/2023 |
Watch this video showing how an online personality created his own lyrics, then overlaid Kanye West's voice to do the singing. Musicians will be creating AI models of their voices for people to do this, from which they will figure out a way to earn licensing revenue. People can now create the music they want using the voices they want. https://eddiesoehnel.com/external-platform-content/#hub-dataset-external-view/?view_1573_search=IP%20as%20Platform&view_1573_page=1
Consumer Discretionary | Consumer Discretionary: Leisure Products | Hobbies/Lifestyle & Outdoor | view
04/07/2023 |
AI tools - see image |
view
04/06/2023 |
Corporate America is talking about AI in the tech and communications sectors only - see image
view
03/30/2023 |
Added to slideshare overview: With AI and UGC, so much content will be created that it will overwhelm audiences, and it will be hard to capture attention with it. The cost to create content will go to nearly nothing, and the ability to monetize it and attract attention with it will go to nearly nothing too. If content is no longer something a brand can use to differentiate, brands will have to shift to creating unique experiences to capture customer attention.
view
03/30/2023 |
Added to slideshare overview: ChatGPT and other AI platforms are exploding, taking everyone by complete surprise, because the training models feed back on themselves exponentially (more use = more data = better models = more use = …). ChatGPT in particular has emerged as the new operating system: the company sticks to what it does best, which is language models, and it is opening up APIs for companies to plug into its architecture and contribute their data. The 2000s were about optimizing online for SEO around Google search; the 2010s were about optimizing for FB/Insta; the 2020s are emerging as optimizing around ChatGPT. Plug in your product data by building an app or API integration to ChatGPT so that consumers can interact with your data and your products via text prompt. Moved the actions around AI up from the 2025-ish slide to now.
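To make that integration idea concrete, here is a minimal sketch of a product-catalog endpoint that a ChatGPT-style plugin or agent could query; the framework (FastAPI), route, fields, and products are hypothetical illustrations, not any vendor's actual API:

```python
# Minimal sketch: expose product data over a simple HTTP API that a
# ChatGPT plugin (or any LLM integration) could query for live answers.
# All names and data here are illustrative.
from fastapi import FastAPI

app = FastAPI()

# Stand-in for a real product database
PRODUCTS = [
    {"sku": "TEE-001", "name": "Organic cotton tee", "price": 28.00, "in_stock": True},
    {"sku": "BAG-002", "name": "Recycled canvas tote", "price": 42.00, "in_stock": False},
]

@app.get("/products")
def search_products(q: str = ""):
    """Return products whose name matches the consumer's text prompt."""
    q = q.lower()
    return [p for p in PRODUCTS if q in p["name"].lower()]
```

Pointed at an endpoint like this, the language model can answer a consumer's prompt ("do you have an organic tee in stock?") from live catalog data instead of having to go searching for it.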
view
03/29/2023 |
https://twitter.com/rpnickson/status/1639813074176679938 Watch this two-minute video where a trained AI model of Kanye replaced the vocals of the original rapper. The results will blow your mind. Utterly incredible.
view
03/26/2023 |
Apple has made huge strides in the GPU and Neural Engine parts of their A-series and M-series chips for those devices I mentioned. But they are not yet powerful enough to train and make predictions on a large model on-device. What we are talking about is that in five-ish years, your iPhone will train an LLM on the level of GPT-4 using your data, and be able to run predictions locally with a good user experience. All on-device.
view
03/21/2023 |
The amazing developments with #chatgpt keep coming at a lightning-fast pace. It was just announced that companies can create plugins allowing ChatGPT to interface with their systems to obtain real-time information. This makes ChatGPT the operating system, shifting the Internet interface from screen/keyboard to screen/voice. ChatGPT provides the voice/speech, while companies provide their data. ChatGPT sticks with what it is good at, which is language models, and lets companies plug in their real-time information so it does not have to go searching for it. I foresee that #consumerbrands will be plugging in their product listings so that consumers can access them via voice. This is really bad for Google. It's hard to believe, given their capabilities in AI and their data, that they are allowing this to happen. Either they move very quickly to catch up or, in a matter of months, they will never recover.
view
03/15/2023 |
Can AI make secondhand luxury shopping easier? AI startup Sociate has developed a new generative chatbot that aims to improve the shopping experience, which is rolling out with resale platform Hardly Ever Worn It. Despite advances, the tech remains a hard sell. BY MADELEINE SCHULZ, March 15, 2023. Shoppers can take their pick from 30,000 items on British pre-owned luxury resale platform Hardly Ever Worn It (Hewi). But while the search engine helps provide a steer, Hewi’s range represents a daunting challenge. Some consumers — particularly Gen Z — love the resale hunt. One of their top three incentives for buying secondhand is to have more fun while shopping (alongside cost-saving and sustainability), according to Thredup’s 2022 resale report. They’ll trawl through Depop and the RealReal and spend hours sifting through clothing on Poshmark and Vestiaire Collective to find “the one”. However, not all shoppers want to dig. For many, it can be fatigue-inducing, particularly for those accustomed to having an in-person sales associate to guide luxury purchases, as Sarah Davis, founder and president of luxury resale site Fashionphile, told Vogue Business Founders’ Forum attendees in January. It’s “death by choice”, says Brandon Holley, chief fashion officer of AI e-commerce platform Shoptrue. Enter Maia, a new chatbot created by Sociate, an AI startup founded in 2020. Maia learns to ask the right questions to guide customers to the products they want and encourage brand discovery. Launching on Hewi today, Maia’s technology is based on the concept of curious AI and is multimodal, meaning it “sees and speaks” to gain a deeper understanding and produce better output. Unlike ChatGPT, which uses a language model and requires items to be labelled, Sociate’s tech can understand images and context, its founders say. OpenAI's new GPT-4, announced yesterday, is currently testing image recognition, according to The Guardian. But can it get over perhaps the biggest hurdle: earning consumers’ trust? Responding to emotions: Three metrics will be used to measure Maia’s success, Sociate says: engagement (keeping people on the website and searching); discoverability (how many products people are looking at); and, ultimately, conversion (making purchases). The goal is to increase engagement by 20 per cent, discoverability by 30 per cent and conversion rates by 10 per cent. The overarching objective is to make the shopping experience more user friendly. A Hewi customer might type in, “I want to dress like Wednesday Addams”, and Maia — which has its own digital avatar created by Unus Labs — will pull up a dark-coloured, white-collared range of dresses and outfits. Looking to writer and influencer Camille Charriere’s Instagram for inspiration? Tell Maia her name, and she’ll show you options. Sociate co-founder and CEO Yasmin Topia says she asked Maia for Andy Warhol-inspired clothes. “There was a soup can dress in the [Hewi] inventory,” she recalls gleefully. “There’s no way you would have found that otherwise.”
Maia offers an experience designed to be akin to luxury shopping by creating personalised encounters, says Rachel Reavley, advisor to Hewi. The feature also aids new brand discovery. “I could search for a Dolce & Gabbana floral print dress and Maia would match other brands and dresses that are like that look,” explains Tatiana Wolter-Ferguson, Hewi co-founder and CEO. A host of resale sites already use AI technology to generate resale value assessments (such as Rebag’s AI appraisal index Claire); to increase backend efficiency (Thredup uses Vue.ai to process inbound products); and to support sellers to create product descriptions with ease (Shopify just announced its ChatGPT integration for this purpose). Maia’s point of difference is that it helps the consumer to shop more effectively, says Topia. Topia highlights the concept of anonymous intimacy, whereby shoppers prefer to sidestep interactions with persistent sales associates when they’re in luxury stores. Sociate has trained the AI to know at what point to ask follow-up questions, and to ask them in the right tone with the right words to extract the most knowledge. They’ve turned this into what Topia calls a “thin layer” of proprietary technology that can sit on other large transformer and generative AI models. Connecting AI more effectively to shoppers’ emotions could radically change the retail landscape, says Jess Hendrix, head of retail experience at Razorfish and president and CEO of Saatchi & Saatchi X, a shopper marketing agency. Maia can, for example, offer suggestions based on trend and event searches. “If resellers can link to key moments in consumers’ lives, they will amplify their overall fashion experience in powerful ways,” says Hendrix. Making the case for consumer-facing AI: The technology remains a tough sell: chatbots tend not to be an enjoyable consumer experience. Half of consumers surveyed by Forrester last year said they are “often frustrated” by their experience using chatbots. Topia understands the hesitancy. “People couldn’t imagine better, smarter bots,” she says. “So, it was really hard to get people to believe that it was worth doing — and that it was possible to do.” Fashionphile’s Davis is upbeat about the backend potential of AI but among those hesitant over its consumer-facing capabilities. “Everything I’ve seen so far where the customer deals with the AI is very clunky,” she says. “Our ASP [average selling price] is around $1,700. People want a little bit of help. They want to talk to a person.” Maia is now Hewi's representation in the metaverse. Should its consumers wind up in this space down the line, the retailer will meet them where they are, Wolter-Ferguson says. It’s easier to test and learn when the tech is on the backend, says Forrester analyst Rowan Curran.
And given that 30 per cent of customers who have a bad chatbot experience abandon their purchase, according to Forrester, companies will want to make sure the tech is up to par before releasing it to the public. Shoppers are also wary about trusting chatbots. While some 80 per cent of people want more personalisation, the majority of consumers don't expect to rely on machine recommendations beyond quick answers to simple questions, according to research by Epsilon and Deloitte respectively. “There’s a natural tension,” Holley of Shoptrue says. “So, you have to harmonise the two.” For Holley, this means adding a “human last mile”. At Shoptrue, people can add AI recommendations to “playlists” — which, in turn, trains the AI. Holley says AI should narrow down what shoppers see, just as a salesperson does, then let humans make the final pick. Hype is a double-edged sword: The excitement around generative AI is resulting in “a huge diversity of ideas” — some of which are better than others, says Forrester’s Curran. With so much noise crowding the space, it can be difficult to identify the projects with long-term value-add. However, this noise has proven beneficial for Sociate. Per McKinsey & Company’s recent study, though women drive more metaverse initiatives, male-led metaverse companies receive 95 per cent of funding. That Sociate was female-led and its AI was vaporware — software that’s still at the concept stage — made it a tough initial pitch. The hype around generative AI helped Sociate raise awareness, and in December 2022 it secured funding from Outlier Ventures (which partners with Farfetch on its Dream Assembly accelerator programme). Sociate expects to make £500,000 in revenue in 2023. Sociate is also part of New York Fashion Tech Lab’s (NYFTLab) 2023 cohort, meaning it’s one of six B2B women-led retail-tech startups — three of which work with AI — to participate in the organisation's 12-week mentorship programme (past participants include digital fashion company DressX, virtual store platform Obsess and digital ID startup Eon). Through this, Sociate is in conversations with retail brands to implement its B2B solutions. Amid this hype, and ChatGPT’s rollout for business consumers, Curran encourages brands and companies to look at AI’s potential beyond chatbots — as, down the line, this is not what AI will look like in most companies. “The space is more exciting than that,” he says, pointing primarily to backend solutions (though he does single out multimodal AI-driven chatbots as a consumer-facing bright spot). ChatGPT parent company OpenAI told Vogue Business shortly before its business product was announced that it “might be too early for fashion use cases” — though Shopify, for instance, is now integrating the tech at the product description-generation and chatbot levels. Over time, Maia’s capabilities will become consultative — shoppers will be able to discuss what they ought to wear when, receive tips about what will go well with the items they own and ascertain what pieces they should buy to elevate their wardrobe.
Hewi’s Wolter-Ferguson says the further potential includes authentication, the listing of products and the negotiation of prices. It’s still early days. Wary companies are waiting for the right opportunity to offer AI to consumers. This waiting game is nothing new. Hewi advisor Rachel Reavley recalls her time at Net-a-Porter in the early days of e-commerce. “For eight years [it] was a battle getting people to understand that you could sell luxury online,” she says. At this early stage, it’s all about asking the right questions, rather like Maia — knowing what you know and what you need to find out. “You want to ride the wave,” Wolter-Ferguson says, “and be the one that’s front-crawling out.” Key takeaway: AI startup Sociate is partnering with resale platform Hewi to make the secondhand luxury shopping experience more user-friendly. Via its generative, multimodal AI model, a new chatbot called Maia has the potential to improve the image of chatbots by offering curated suggestions based on trend and event searches, responding more effectively to shoppers’ queries and acting as a personal consultant. |
Consumer Discretionary | Consumer Discretionary: Textiles, Apparel & Luxury Goods | Apparel and Personal Accessories | view
03/13/2023 |
Added more notes on AI predictions to the tool "Web3 Forward Guidance", under the sub-section titled "Addition 3/13/23".
view
03/12/2023 |
FTI AI trends notes |
APPLIED AI FOR HR
Recognition systems can now be deployed to watch people in an interview and gauge enthusiasm, tenacity, and poise. Algorithms analyze hundreds of details, such as their tone of voice, facial expressions, and mannerisms to best predict how a candidate will fit in with the culture of a community. Startups such as HireVue use AI systems to help companies decide which candidates to hire. But this kind of recognition technology has practical applications beyond job interviews: It can detect when someone is likely to make a purchase—or attempt to shoplift—in a store, whether someone is lying, and whether someone is receptive to new suggestions and ideas. Unlike security cameras, which tend to have a light indicating they’re recording, algorithms work invisibly, which means that this is an area that could face regulatory scrutiny. The consumer advocacy organization Electronic Privacy Information Center filed a complaint with the US Federal Trade Commission requesting an investigation into HireVue, alleging its tools produce results that are “biased, unprovable, and not replicable” through algorithmic models.

DETECTING EMOTION
A new type of neural network can determine how people are feeling. Using radio waves, AI can detect subtle changes in heart rhythms, run a pattern analysis, and predict someone’s emotional state in a given moment. A team from Queen Mary University of London used a transmitting radio antenna to bounce radio waves off test subjects and trained a neural net to detect fear, disgust, joy, and relaxation, as people were shown different videos. The system accurately tagged emotional states 71% of the time, which signals new opportunities for health and wellness applications, as well as for job interviews and the government/military intelligence community.

GENERATIVE AI FOR PERSONAL EXPRESSION
In 2022, experimental apps moved quickly into the mainstream, as consumers enthusiastically generated AI avatars of themselves. Lensa AI, a photo-editing app that creates avatars automatically by using a handful of photos, became one of the most downloaded apps in the world. It produced dozens of images depicting a variety of scenes and styles, such as an anime hero, a Renaissance-era princess, or a modern artist. Amper Music creates and mixes tracks using parameters set by the user, while OpenAI’s MuseNet launched a tool that can generate songs with up to 10 different instruments and music in up to 15 different styles. It’ll also mimic a famous artist, such as Mozart. Generative AI apps haven’t been without controversy: Lensa AI tends to oversexualize women, while OpenAI’s generator also samples from modern composers who don’t receive any royalties. (We used it to re-create Luis Fonsi’s “Despacito” in the style of Russian-American composer Rachmaninoff, which sounded just as weird as you’re imagining.)

SIMULATING EMPATHY AND EMOTION
AI can now measure biomarkers that suggest a person’s emotional state, such as agitation, sadness, or giddiness. Precisely detecting human emotion is challenging, but companies with a large enough data set are developing accurate models. Amazon’s Rekognition API infers someone’s emotions using facial recognition and physical appearance. (Though, Amazon is quick to point out in its documentation that the “API is only making a determination of the physical appearance of a person’s face.
It is not a determination of the person’s internal emotional state and should not be used in such a way.”) Replika uses AI to evaluate voice and text, and over time the personal AI bot mirrors the user in “conversations.” Affectiva Human Perception AI analyzes complex human states with speech analytics, computer vision, and deep learning. For example, the automotive sector uses Affectiva’s technology to detect a driver’s emotional state—such as sleepiness or road rage—and the program can make real-time suggestions to improve the driver’s performance. (Affectiva was acquired by Smart Eye in 2021 but has kept its own brand.)

BIOMETRIC SCORING
Quantifying and analyzing our biometric data can reveal patterns in our activities and a lot about who we are, what we’re thinking, and what we will likely do next. Behavioral biometrics use machine learning on hundreds of unique biometric data points to understand, authenticate, nudge, reward, and punish. Behavioral biometrics tools can be used to map and measure how a user types—what force is used to press down on screens, whether the Cs and Vs on a phone are tapped with a fat finger, and how quickly fingers flick when hunting through search results. Those tools know a user’s unique typing pattern on a physical keyboard, too—whether someone constantly spells the word “behavioral” wrong on the first try, or holds down or repeatedly taps on the delete button. Most of us are unaware that we have certain identifiable behaviors, but machines perceive them. In the near future, such patterns will pose security vulnerabilities—as well as interesting new opportunities. Imagine never having to use a password again; your bank would simply recognize a customer’s typing pattern after a few sentences. The downside: If behavior is observable, at some point it will become repeatable, too, which represents a security risk.
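As a toy illustration of the typing-pattern idea above — a minimal sketch assuming one simple feature (the average gap between key presses) and an arbitrary 25% tolerance; real behavioral-biometric systems model far richer feature sets:

```python
# Minimal sketch of keystroke-dynamics matching: compare a login sample's
# inter-key timings against an enrolled profile. The feature (mean gap)
# and tolerance are illustrative assumptions, not a production design.
from statistics import mean

def flight_times(timestamps_ms):
    """Gaps between successive key presses, in milliseconds."""
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

def matches_profile(sample_ms, profile_ms, tolerance=0.25):
    """True if the sample's mean gap is within 25% of the enrolled profile's."""
    sample_avg = mean(flight_times(sample_ms))
    profile_avg = mean(flight_times(profile_ms))
    return abs(sample_avg - profile_avg) / profile_avg <= tolerance

# Hypothetical key-press timestamps (ms) from enrollment and a later login
enrolled = [0, 180, 350, 545, 700, 910]
attempt = [0, 170, 360, 530, 720, 900]
print(matches_profile(attempt, enrolled))  # True -> typing rhythm matches the owner
```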
view
02/20/2023 |
What is AlgoveraAI: AlgoveraAI is a decentralized AI assistant for organizations. By connecting personal data from web2 and web3 sources, Algovera’s AI model will create personalized AI assistants.
view
02/13/2023 |
Shopify to integrate AI for storefront customisation. Shopify is adding OpenAI’s ChatGPT to its suite of merchant tools to make writing product descriptions quicker and more efficient. According to merchant feedback, the most time-consuming activity when launching a new store is storefront copy creation. Conversely, product descriptions are key to a storefront’s success. The aim is to take the burden of copy creation away from merchants, not only to speed up the process of launching a store, but to help develop a consistent — and effective — brand voice in the process. Using ChatGPT, merchants can input key words about their products, and the underlying GPT-3 model will do the rest of the legwork, producing product descriptions. Details about the feature's North American rollout will be available in the following months. “With Shopify’s winter 2023 edition, we’re showcasing tools that give our merchants access to cutting edge technology and an unfair advantage in commerce,” says Miqdad Jaffer, director of product at Shopify. “This includes the biggest upgrades we’ve ever made to the Shop app, an AI tool designed to write high-quality, compelling product descriptions, powered by Shopify Magic and enhancements to a merchant’s most important feature, checkout.” While much has been made of ChatGPT’s impact on the creative side, namely the art world and digital fashion space, this recent move reflects increasing brand and company interest in the ways the tech might improve fashion’s backend operations. Product descriptions, alongside business plans, are a key use case, Journey’s Brandon Kaplan recently told Vogue Business. DressX has already been using AI for this purpose. By integrating it into Shopify’s own software, it’s a less daunting approach than simply encouraging merchants — who may have no experience with AI — to try out ChatGPT (or a similar offering) on their own. As use cases become more concrete and normalised, and the tech continues to improve, Shopify says it plans to integrate more AI into its platform.
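As a sketch of this keywords-to-description flow — assuming OpenAI's chat completions API, with the model name and prompt wording as illustrative choices, not Shopify's actual implementation:

```python
# Sketch: turn merchant keywords into a product description with an LLM.
# Model and prompt are illustrative; this is not Shopify Magic itself.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def product_description(keywords):
    prompt = (
        "Write a short, compelling e-commerce product description "
        f"for a product with these attributes: {', '.join(keywords)}."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(product_description(["ceramic mug", "12 oz", "dishwasher safe", "matte finish"]))
```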
view
02/13/2023 |
Is This the Trend Report of the Future? An AI Interprets the Fall 2023 Menswear Season. The output was fashion runway images of models showcasing fashion trends in clothing items created by the artificial intelligence. Rather than people interpreting trends, let AI do it; it can process a lot more data than humans and may therefore do a better job.
Is This the Trend Report of the Future? An AI Interprets the Fall 2023 Menswear Season. BY JOSÉ CRIALES-UNZUETA, ART BY ZAK KREVITT, February 6, 2023. Late last year, I visited Outlier designer Willie Norris at her Brooklyn studio. After she walked me through her collection, she told me she had something cool to show me. Like any decent reporter—and curious person—I was more than down to check it out. Turns out Norris had partnered with the artist Zak Krevitt to interpret the images for her spring 2023 lookbook through the lens of AI—yes, artificial intelligence. Krevitt turned Norris’s models into fantastical otherworldly beings: Aliens walking down a runway in menswear of fantastical proportions—it was as if a Vogue Runway lookbook and a surrealist painting were put in a blender. A few weeks later, I connected with Krevitt on Instagram. Norris had produced blankets of these AI illustrations, and as I shared one online Krevitt popped up in my DMs to claim his artwork—as he should! We started chatting. Like many, I had seen all the controversy about AI online, but was curious about how it worked. Krevitt explained many things I’m unable to eloquently spell out here: Codes, custom AI models, image databases, the math in data science. What I took from our conversation is that AI, as scary as it sounds—everything from Scarlett Johansson in Her to Wall-E to I, Robot come to mind—is an opportunity to process images and data. It’s here whether we like it or not, so why not try to understand it? Here’s what we did: Krevitt and I spoke when the men’s shows were getting under way, and we saw an opportunity to create an AI trend report of sorts. As I do every menswear season, I started tracking trends from the moment the first look at the first show walked down the runway, and once the Runway team and I had landed on a finalized list, Krevitt fed the model he created all of our images: Going out tops, sheaths, hefty jackets, new athleisure, reimagined office wear… My Vogue Runway report features nine trends that dominated the fall 2023 menswear season; what Krevitt generated are 11 looks that synthesize the collections. Yes, 11 entirely new looks created through AI based on all the data we collected from the menswear season. “There’s a misconception that AI art and AI image generation is a collage that takes parts of existing images and places them somewhere else,” Krevitt says. “But that’s not what is happening. What we did is that, with a custom model, we taught the AI a new thing. The first step is a blurry noise pattern, and each step draws little connections between those pixels in a process that the AI thinks might be right.” To put it plainly: Krevitt created a custom model based on the runway collections and our ideal, photorealistic output. “All of the pixels that are drawn are totally novel based on all the information that you give it,” he adds. To put it even more plainly: “It’s like if I went to all of these fashion shows and stared at each model coming down the runway really hard, and then a week later I took pen and pencil and drew them from memory,” surely the drawings wouldn’t look like the collections, but they’d encapsulate what they’re about: Silhouettes, proportions, fabrications, drape. Krevitt understands the questions around AI—and the collective anxiety.
But trust, this is not a replacement of a trend report, nor is it an attempt to supplant anyone’s editorial eye with an AI. “What we’ve done together is more an exploration of data science than it is necessarily fashion,” he says. If there’s anything a fashion watcher can tell you, it’s that there is nothing like the real thing—that’s what keeps us coming back to the shows each season. “We’ve taken data points from the runway and essentially put them into a mathematical model and said, hey, let’s explore this,” Krevitt adds. “But of course it’s not going to have the same soul and heart and spirit of something that was cut and sewn and then worn by a person.” In my opinion, nothing ever will. |
Consumer Discretionary | Consumer Discretionary: Textiles, Apparel & Luxury Goods | Apparel and Personal Accessories | view
02/13/2023 |
AI tools landscape - probably missing many more, covering areas including text, video, image, code, speech, 3D, music, gaming, avatars, science, robotic process automation |
view
02/09/2023 |
Here are 5 prompts you can steal to 10x your email copywriting results 👇
1. “Please write an email marketing campaign using the 'Expectation-Surprise' framework to generate interest and encourage action from [ideal customer persona]. Set expectations for the reader about what they can expect from our [product/service], but then surprise them with unexpected benefits or features that exceed those expectations. Include talking points such as [unique selling point], [pain point], and [desired action].”
2. “Write an email marketing campaign using the 'Exclusive-Inclusive' framework to position our [product/service] as elite and desirable to [ideal customer persona]. Make it clear that our product is exclusive or elite in some way, but also emphasize that it is accessible and inclusive to a wide range of customers. Include talking points such as [unique selling point], [pain point], and [desired action].”
3. “Using the 'Positive-Negative' framework, please write an email marketing campaign that focuses on the positive aspects of our [product/service] and the benefits it will bring to [ideal customer persona]. Also acknowledge and address any potential negative consequences or drawbacks in a constructive way. Include talking points such as [unique selling point], [pain point], and [desired action].”
3.5. This is the 4th post I’ve done in my ChatGPT series. The first 3 posts have been seen by over 2 million people. Make sure you follow me Chase Dimond so you don’t miss cutting edge marketing tips like this.
4. “Write an email marketing campaign using the 'Past-Present-Future' framework to connect our [product/service] to [ideal customer persona]'s past experiences or memories. Show how it can improve their present situation, and then show how it can shape their future in a positive way. Include talking points such as [unique selling point], [pain point], and [desired action].”
5. “Craft an email marketing campaign using the 'Friend-Expert' framework to establish a connection with [ideal customer persona] and position our brand or [product/service] as an expert in our field. Use a friendly and approachable tone to connect with the reader, but also highlight our credibility and expertise in our field. Include talking points such as [unique selling point], [pain point], and [desired action].”
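A minimal sketch of how the bracketed placeholders in these templates could be filled programmatically before the prompt is sent to ChatGPT; the persona, product, and talking points below are invented examples:

```python
# Sketch: fill one of the bracketed prompt templates with campaign specifics.
# All values are illustrative placeholders.
TEMPLATE = (
    "Please write an email marketing campaign using the "
    "'Expectation-Surprise' framework to generate interest and encourage "
    "action from {persona}. Set expectations for the reader about what they "
    "can expect from our {product}, but then surprise them with unexpected "
    "benefits or features that exceed those expectations. Include talking "
    "points such as {usp}, {pain_point}, and {action}."
)

prompt = TEMPLATE.format(
    persona="busy new parents",
    product="organic baby-snack subscription",
    usp="pediatrician-approved ingredients",
    pain_point="no time to shop for healthy snacks",
    action="start a free two-week trial",
)
print(prompt)  # paste into ChatGPT, or send via an API call
```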
view
02/08/2023 |
Key takeaway: Zero10 is moving forward with a business strategy that is twofold: partnering with traditional fashion brands to offer B2B solutions for AR try-on and offering emerging designers, individual creators and other users open access to 3D and AR fashion design tools. The former is how Zero10 generates most of its revenue, much of which is invested in its AR Fashion Platform to achieve the latter. Custom Shop is the next step in achieving the goal of making AR fashion accessible, by integrating generative AI so that “any kid” can produce AR fashion. |
Zero10 wants to crack open 3D and AR fashion design for the masses. After launching its AR Fashion Platform in November, Zero10 is gearing up for its newest edition: Custom Shop, an AI-enabled feature that helps users create AR fashion. It’s another step in proving the value of AR for fashion brands. BY MADELEINE SCHULZ, February 8, 2023. George Yashin was working on his brand ZNY in 2019 when he saw The Fabricant’s Iridescence dress — the first recorded sale of a fashion NFT — and something clicked. He describes the moment as akin to Malevich’s Black Square, regarded by many artists as “the Hour Zero of modern art”, but for fashion. “When The Fabricant dropped this digital dress, a new era began,” he says. Yashin was feeling burnt out from fashion’s breakneck pace — he was unable to finish ZNY’s Autumn/Winter 2020 collection for its 2019 Paris Fashion Week presentation. So the team brought "consciously unfinished" items to Paris, and called the collection "Unready to Wear". He clocked then that fashion was moving at an unsustainable speed. “When I came back from Paris, I started to think about what technologies could [mitigate] the problem.” Yashin began testing new technologies, first by photoshopping digital items onto photos — which, he quickly realised, was unscalable — so he turned to AR, and founded his company Zero10 in 2020. He sought out an engineer and suggested real-time AR try-on to them; their reaction, Yashin says, was along the lines of: “You’re crazy. But I can do it.” Zero10 offers tools such as an AR mirror, where customers can try on garments virtually in-store — or in front of a billboard or storefront window — and a software development kit which can be integrated into brands’ own applications to offer AR try-on. It has so far collaborated with brands including Edward Crutchley, Maisie Wilen, Barragán and Alexandra Sipa. More recently, Zero10 opened up its services to help fulfil Yashin’s vision for a more democratised fashion industry. In November, the company launched its iOS app, AR Fashion Platform, to let designers and creators make their 3D designs AR compatible. This spring, it will debut a new feature: Custom Shop. Using generative AI, Custom Shop makes it possible for users — even those lacking the specific design skills and experience — to create digital designs to wear in AR. On the extent to which the wider fashion community is ready to embrace AR fashion, Yashin says that “there is interest from the masses, however they are awaiting confirmation to their hypothesis and clarification to their questions and uncertainty about the value of these products and technologies.” The goal is to make AR technology easier to access, by placing it in the hands of emerging designers and individual creators.
“Similar to how designers of physical clothing must have the knowledge and applied skills on tailoring and constructing technologies, designers of digital clothing must understand how technologies work, [and] have the skills and know-how to bring their creations to life,” Yashin says. By making creation easier via Custom Shop, Yashin hopes to build a community of professionals to participate in what he believes is fashion’s next stage. “2023 will be a big year for AR,” he says. Zero10 has stiff competition. Snap is one of the leaders in the space, having boosted AR try-on tools for companies including Farfetch and Prada in 2021, before offering these up to other brands and companies in an effort to expand AR adoption and development outside of the Snapchat app. Major brands have dabbled. In June 2022, Burberry offered an AR shopping experience on its website (with Snap Inc.’s Vertebrae). Roksanda's NFT, which dropped in February last year, was available for AR try-on via an Instagram filter. And, that same month, Bottega Veneta launched an AR app to tease Matthieu Blazy's first show. Digital-native fashion companies such as DressX and Rtfkt have also long been testing and offering AR try-on. The interest is there — Zero10 has raised $6 million in funding to date, and expects to raise another $3 million this year as part of its ongoing seed round. Its primary source of income is its B2B solutions for brands, with 70 per cent of revenue coming from its AR mirror, 20 per cent from the software development kit and 10 per cent from brand integration to the Zero10 app. Its projected revenue for this year is $1.5 million, based on preliminary estimations and upcoming brand collaborations. The company is looking forward in two directions: partnering with major players, and assessing how best to onboard more emerging designers to the tech. AI design for all? Zero10 launched its platform, open to independent creators and small brands, to increase AR accessibility and generate more content. Both brands and individual creators can upload their designs to the Zero10 app. The Zero10 3D team then takes the files and makes them AR-compatible so that a designer can release a digital fashion collection in the app. Through this, a brand can link to its website and give customers try-on options before purchasing. Soon, Zero10 will introduce in-platform monetisation opportunities. Consumers can browse Zero10 for digital clothing and choose to either place the garment on an existing photo or to overlay it in real time. Zero10 wants to support 3D creators — but knows that there’s a limited number of them. Because of this, it wants to help more people break into the space — which is where Custom Shop comes into play. “We want to make this kind of self-expression available for everyone,” Yashin says. Zero10’s Custom Shop will allow users to become creators. The first step is to create AR-ready items with AI tech (using Stability AI, users can input words to generate garments); the next is for users to try on and share the content. “I feel that we could see the new Margiela in the digital world,” Yashin says.
“He could come not even from the creator community, but the user community. It’s an open tool for every kid to use to create digital fashion.” Methods for creating digital, AR-wearable fashion are becoming much easier, says Matthew Drinkwater, head of the London College of Fashion’s Fashion Innovation Agency. “When you look at Lens Studio and SparkAI, we’re beginning to make this far more accessible to a much wider audience.” Even in the first month of 2023, there’s been a surge of interest in generative AI, particularly given ChatGPT’s hype. This does, for some, engender a sense of angst — what does it mean for creatives? However, there’s still a level of human expertise required. It’s a tool set, says Drinkwater, who expects that we’ll start to see a hierarchy of who excels in its use, as new specialisms and skillsets develop. “This isn't an unskilled person,” Rupert Breheny, Google technical specialist in AI augmented creativity, recently told Vogue Business. “That person has to know the industry inside out, and they know how to express ideas in a way that is meaningful for the art.” However, it is a massive time-saver, and enables the creation of imagery (and fashion) that isn’t possible in the physical world. Yashin is grateful for the collaborations, in that they garnered media attention and provided learnings to improve internal workings based on partners' feedback. Edward Crutchley and Maisie Wilen designer Maisie Schloss both emphasise the physical impossibility of their Zero10 collaboration designs. Crutchley explored ideas that could never exist in the “real world”, such as crystalline structures and lightning effects. Though Schloss found limitations (she couldn’t make a bodycon dress), she leaned into animated garments and colour-changing fabrics. Zero10 is investing in emerging designers and creators, and offering tutorials and workshops to onboard those interested, because building up that community is as important as the platform itself, Yashin says. “We believe that creators are the new fuel for the fashion industry and they will help to lower the entry barrier.” Proof of concept: Though brands have shown interest, it wasn’t all smooth sailing. Zero10 is now fielding requests from big names, but at the beginning, it was the one approaching — and sometimes paying — designers to partner on collaborations. “We needed proof more than the brands needed projects with us,” Yashin says matter-of-factly. He wanted to speed up the process of proving Zero10’s value, so he allocated a budget to have partners participate in the project on a commercial basis. “It helped tremendously,” Yashin says. Costs varied from $5,000 to $25,000. Now, the two sides of Zero10’s business — the brand opportunity side and customer-facing side — coexist. By proving the concept, Zero10 showed brands why they ought to get onboard, and in turn, can use funds raised via brand collaborations and solutions to invest back into the platform to educate and onboard those interested.
“The first year when we came out with the app, it was super hard,” Yashin says. Fashion was resistant. Zero10 sought out brands with the same passion about new technologies. Both Schloss and Crutchley had already been experimenting with digital design; Schloss designs and manipulates digitally, while Crutchley had done a 3D design partnership before. “The clothes I design do lend themselves to AR both thematically (the work regularly explores contemporary digital culture) and technically (digital assets were ready to share with Zero10's team),” Schloss says. Zero10 and Crosby Studios' NYFW pop-up invited fashion week attendees to experiment with AR fashion. Yashin credits Zero10’s NYFW collaboration with Crosby Studios as its main project. At NYFW, the two hosted a physical pop-up store. The hook? All items were digital-only. Customers tried on virtual garments in store via QR codes, with the option to purchase the digital goods or share to social media. “Everybody saw that it works,” he says. It was a big investment, and it paid off. Visitors spent an average of 30 minutes in the space, and there were over 25,000 digital try-ons via the Zero10 app. AR integration is inevitable, says Drinkwater. The perks of 3D imaging for brands are long proven at the Fashion Innovation Agency. “If you have a 3D asset on your website, you’re looking at a click-through rate anywhere between 20 to 40 per cent higher than a flat 2D image,” he says. “Your conversion rate is higher, your return rate is lower.” Layer AR on top of that, and you can give existing products added dimension. “You can build a much more layered experience for consumers through AR, whether that be through storytelling or more experiential events.” It also offers a way to access brands with a high price point. Sipa’s handmade recycled plastic lace garments are expensive and time-consuming to produce. “They are all one of a kind and some too expensive to sell, like the ruffle coat that Zero10 reproduced,” she says. Sipa was excited to see people having fun with her designs and sharing them on social media. For Crutchley, there’s value in that it enables small brands to reach new customers who don’t currently have access to his products, “but could easily become future purchasers”. AR also presents interesting growth opportunities for brands. As Sipa grows her brand, she’s shifting away from one-of-a-kind recycled pieces to “more sustainable commercial fashion design”. She views AR as a means for her wire pieces to exist alongside the regular wholesale business. Sipa thinks of the Zero10 collab as a way to make extravagant garments accessible to more people. “I [still] get almost daily notifications of people tagging us in photos with the AR clothes,” she says. Bigger brands are opting in.
Coach will integrate Zero10’s proprietary technology into its retail strategy this spring. Zero10 is also offering three AR items to Private Policy’s NYFW show attendees. Guests will be able to virtually try on these designs right off the runway — an AR approach to Carolina Herrera’s immediately available Roblox dress (which sold out and spiked in value). Zero10’s Custom Shop feature has also piqued brands’ interest. Explaining why, Yashin nods to Levi’s, Nike and Tommy Hilfiger, all of whom offer or have offered physical customisation. “You can make your custom shop available to try on before purchase,” he offers as an example. “It’s not only fun for users. It’s quite a cool tool for businesses.” AR will change how we engage with fashion, Drinkwater says, by bringing audiences into brand storytelling narratives. Plus, beyond financial and business operation benefits, “there’s an element of joy to that experience which is sometimes forgotten,” he says. This is key when thinking about building an emotional reaction in consumers, which is so often a reason for making a purchase. “It’s just not a typical KPI for brands,” Drinkwater says.
Consumer Discretionary | Consumer Discretionary: Textiles, Apparel & Luxury Goods | Apparel and Personal Accessories | view
02/02/2023 |
ARK Big Ideas 2023 - AI notes
- AI should increase the productivity of knowledge workers more than 4-fold by 2030. If so, then invest in the industries where knowledge workers will leverage AI the most: creative work — text, images, product copy, coding.
- Cost to train GPT-3-level performance: now at ~$450K, projected to fall to ~$30 by 2030.
- In a world of low-cost compute, data will become the primary constraint. Proprietary data could create moats — what does that look like?
- AI could lead to a 10-fold increase in coding productivity. If so, then software is not the advantage; it's data, distribution, and brand.
- Cost declines should enable mass adoption of sophisticated AI chatbots. Launched by OpenAI in November 2022, ChatGPT uses large language models to generate text in response to user prompts. Within five days of launch, the number of users reached 1 million. While the inference costs to run the model at scale are ~$0.01 per query, Wright’s Law suggests that by 2030 ChatGPT-style applications will be deployable at the scale of Google Search, processing 8.5 billion searches per day.
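A quick back-of-the-envelope check on that last note, using its own figures (~$0.01 per query and Google-scale volume of 8.5 billion queries per day) to show why the Wright's Law cost declines matter:

```python
# Back-of-the-envelope: ChatGPT-style inference at Google Search scale,
# at the note's quoted ~$0.01 per query.
cost_per_query = 0.01      # dollars, per the ARK note
queries_per_day = 8.5e9    # Google-scale search volume

daily_cost = cost_per_query * queries_per_day
print(f"${daily_cost:,.0f} per day")          # $85,000,000 per day
print(f"${daily_cost * 365:,.0f} per year")   # ~$31,025,000,000 per year
```

At roughly $85 million a day, the economics only work if per-query costs keep falling, which is the note's point.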
view
01/29/2023 |
AI Tools - see notes |
1. Krisp: Krisp's AI removes background voices, noises, and echo from your calls, giving you peace of mind on every call. Link: https://krisp.ai/
2. Beatoven: Create unique royalty-free music that elevates your story. Link: https://www.beatoven.ai/
3. Cleanvoice: Automatically edit your podcast episodes. Link: https://cleanvoice.ai/
4. Podcastle: Studio-quality recording, right from your computer. Link: https://podcastle.ai/
5. Flair: Design branded content in a flash. Link: https://flair.ai/
6. Illustroke: Create killer vector images from text prompts. Link: https://illustroke.com/
7. Patterned: Generate the exact patterns you need for any design. Link: https://www.patterned.ai/
8. Stockimg: Generate the perfect stock photo you need, every time. Link: https://stockimg.ai/
9. Copy: AI-generated copy that actually increases conversion. Link: https://www.copy.ai/
10. CopyMonkey: Create Amazon listings in seconds. Link: http://copymonkey.ai/
11. Ocoya: Create and schedule social media content 10x faster. Link: https://www.ocoya.com/
12. Unbounce Smart Copy: Write high-performing cold emails at scale. Link: https://unbounce.com/
13. Vidyo: Make short-form vids from long-form content in just a few clicks. Link: https://vidyo.ai/
14. Maverick: Generate personalized videos at scale. Link: https://lnkd.in/dmrkz_ah
15. Quickchat: AI chatbots that automate customer service chats. Link: https://www.quickchat.ai/
16. Puzzle: Build an AI-powered knowledge base for your team and customers. Link: https://www.puzzlelabs.ai/
17. Soundraw: Stop searching for the song you need. Create it. Link: https://soundraw.io/
18. Cleanup: Remove any unwanted object, defect, people, or text from your pictures in seconds. Link: https://cleanup.pictures/
19. Resumeworded: Improve your resume and LinkedIn profile. Link: https://lnkd.in/d9EurcnX
20. Looka: Design your own beautiful brand. Link: https://looka.com/
21. theresanaiforthat: Comprehensive database of AIs available for every task. Link: https://lnkd.in/dKhqaaF3
22. Synthesia: Create AI videos by simply typing in text. Link: https://www.synthesia.io/
23. descript: New way to make video and podcasts. Link: https://lnkd.in/d_Kdj35E
24. Otter: Capture and share insights from your meetings. Link: https://otter.ai/
25. Inkforall: AI content (generation, optimization, performance). Link: https://inkforall.com/
26. Thundercontent: Generate content with AI. Link: https://lnkd.in/djFxMZsZ
view
01/23/2023 |
The AI Entrepreneur: The Artificial Intelligence Blueprint for Marketers |
view
01/19/2023 |
Enrich company lists with OpenAI GPT-3 |
Set-up
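A minimal sketch of one way to set this up — assuming OpenAI's completions API, with the model name, prompt wording, and companies as illustrative stand-ins rather than the original tool's configuration:

```python
# Sketch: enrich a company list with a one-line description from a
# GPT-3-style completions model. Prompt and model are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

companies = ["Patagonia", "Ikea", "Kering"]

for name in companies:
    completion = client.completions.create(
        model="gpt-3.5-turbo-instruct",
        prompt=f"In one sentence, what does the company {name} sell, and to whom?",
        max_tokens=60,
    )
    print(name, "->", completion.choices[0].text.strip())
```

The same loop could read names from a spreadsheet column and write the generated descriptions back, which is the typical list-enrichment pattern.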
view
01/18/2023 |
GPT for Sheets™ — like ChatGPT, but from Google Sheets. Get help with inspiration, categorization, translation, correction, and summarization via the GPT function.
view
01/17/2023 |
Tribescaler is a generative text AI focused on marketing copy, and more specifically on lead generators and hooks.
view
01/17/2023 |
ChatGPT for Search Engines - Chrome extension. Whenever you type a search in Google, Bing, or DuckDuckGo, you’ll have a window embedded in that search engine that shows the same results you would get from ChatGPT.
view
01/17/2023 |
WebChatGPT - Chrome extension. Automatically adds references from the web to your ChatGPT prompts, giving them more data so that you receive better results, with references.
view
01/17/2023 |
Replai.so - Your Twitter and LinkedIn just leveled up. This tool will create responses FOR YOU with just the click of a button, directly on Twitter and LinkedIn…
view
01/14/2023 |
Microsoft's VALL-E can imitate any voice with just a three-second sample. Artificial intelligence can replicate any voice, including the emotions and tone of a speaker. |
view
01/12/2023 |
Behavioral biometrics: quantifying and analyzing our biometric data and extending to soulbound NFTs. |
It can reveal patterns in our activities—and as a result reveal a lot about who we are, what we’re thinking, and what we are likely to do next. If what we do, what we buy, the places we go, etc. can be digitized as NFTs, what are the implications of using AI and behavioral biometrics against this data?
view
01/12/2023 |
ChatGPT prompts source (paid) |
view
01/11/2023 |
AI’s revival raises questions for fashion’s creative class. Generative art can streamline and stimulate the creative design process, prompting new opportunities and new conversations between tech enthusiasts and traditionalists. BY MAGHAN MCDOWELL, January 10, 2023. Late last year, a deluge of artistic-looking portraits flooded social media, credited to photo-editing apps including Lensa and Reface. People shared celeb-style illustrations of themselves as swashbuckling superheroes, romantic fairy princesses and stars in their own cosmic fantasy tales. The stylised images are a mainstream example of how artificial intelligence can be used to create new imagery from user-supplied inputs. AI-generated art, referred to as generative art, is also the way that creators of NFT collections World of Women, Bored Ape Yacht Club and Cryptopunks created thousands of variations from a limited number of supplied traits. The global generative AI market is projected to grow from $7.9 billion in 2021 to $110.8 billion by 2030, reports Acumen Research and Consulting; with creative industries including entertainment, fashion and marketing most likely to feel the impact first, says the MIT Technology Review. Now, the AI revival is coming for fashion, offering new tools to harness creativity and a renewed conversation about the role of human ingenuity in the creative process. Fashion photographer Nick Knight used it to help generate thousands of editorial photoshoot-style NFTs that dropped in December, while independent creators are already remixing, resulting in bejewelled Nikes and Balenciaga bags made of melting leather. Supply chain company Cala, which started as a “plug-and-play” tool for influencers, has expanded into AI-generated designs, enabling people to generate new visual designs from text or uploaded reference images. Digital fashion platform DressX recently used AI to help design a collection of digital wearable bucket hats, borrowing from the model used to generate thousands of PFP variations. After attracting attention for their work featuring Nike logos in Renaissance artworks, the artist who goes by Str4ngeThing used AI to generate a conceptual Nike-inspired women’s architectural cape tracksuit that references colours and architecture of the Renaissance. “I never thought I'd see something like this in my lifetime,” says Rupert Breheny, a Google technical specialist in AI augmented creativity, who specialises in fields such as extended reality and fashion. “It started as a toy for me, but you start thinking about, okay, how can I use it in my work?
How can I elevate the stuff that I do on a daily basis, or perhaps more excitingly, move into territory that I was locked out of before? There's a lot of gatekeeping in the art industry, and I would include fashion in that."

Generative design is part of a larger AI renaissance fuelled by the sophistication of and access to new generative art tools. In recent months, tools including Dall-E 2, Midjourney and Stable Diffusion have become available to the general public, enabling various approaches to generating new images using AI, including "text-to-image" models, meaning the ability to turn text-based prompts into images and videos, and the ability to generate new variations of user-supplied imagery.

It's a uniquely tangible technology for creatives. While recommendation engines that populate e-commerce pages or identify gaps in a product assortment are relatively invisible to the average user, the results of generative art are immediately obvious, and meant to be shared and consumed. This presents an exciting opportunity. One might, for example, create a fashion photoshoot in which someone is eating McDonald's and wearing a Nike hoodie in the style of Renaissance painter Jan van Eyck. Or build out pieces in a collection by inputting just a few key silhouettes, colours and fabrics. Or use it to generate a designer collaboration posthumously, or one between those who are rivals in the material world.

It can be used to combat "the fear of blank pages", says Reface co-founder and CEO Dima Shvets. Designers might start with absurd prompts, or use it to create tributes to specific designers or periods, he says. The results are immediate and don't require the monetary or time investments typically required to, for example, create a fashion illustration or commandeer an editorial photoshoot. For this reason, many see the tech as a way to foster inspiration and visualise new ideas, rather than replace them altogether. One still needs to engineer the prompt, after all.

[Photos: These images created by researcher Rupert Breheny using Midjourney illustrate images that previously didn't exist, or would be challenging to create. Many creatives like Breheny provoke the laws of physics or explore historical juxtaposition.]

"This isn't an unskilled person," Google's Breheny says. "That person has to know the industry inside out, and they know how to express ideas in a way that is meaningful for the art, [such as] movie storyboards or editorial images. But they can basically do in minutes what used to take hours." It might also make sense to create imagery that is not really possible physically, he adds, such as an underwater or interstellar fashion shoot.

It also invites questions about intellectual property, artistic licence and divine inspiration, threatening to deepen the divide between techies and traditionalists. For example, if an AI-designed shoe goes viral, who deserves the credit: the computer, or the creative director? If someone remixes a McQueen design, who "owns" the new style? Tech utopians are inclined to see it as an incredible leveller, says Breheny.
However, those who spent years honing their craft in school and in ateliers may be spooked. Artistic fields were "the last thing that were supposed to be replaced with the AI," he says. "This should not have happened."

[Photos: These images were created using Dall-E with the prompt, "a high-quality editorial photograph of a fashion designer using a generative AI art tool in their studio to design fashionable clothing, in the style of Vogue".]

The fashion use-case

The practical fashion uses of generative art are still largely hypothetical, but there are a few early examples. In late November 2022, DressX dropped 444 digital bucket hats that were designed in part using generative AI. The team started by identifying a few key elements that were signatures of DressX, such as chains, sparkles and flowers, in addition to a few traits from other communities. They then established what tiers of "rare" combinations they desired — meaning that some of the final NFT designs would be more scarce, and thus hopefully more valuable. They were able to preview and manually tweak the generated results before ultimately hand-selecting the final 444 options, which were sold as NFTs. Owners could only see which design they received after the final reveal, as is custom in NFT collection drops.

[Photo: The DressX Generative Hat collection was created using tools such as Substance, Clo3D, Cinema 4D and Houdini Software to combine colours, textures, accessories, partner designs and various DressX logo colours to reflect different tiers. "Generative fashion can be thought of as a styling tool that provides multiple options for combinations and variations," says co-founder Natalia Modenova.]

This process illustrates how a brand might custom-select the silhouette and design elements before letting the AI take the wheel, within reason. DressX used a combination of available plug-ins and an internally built tool to generate the results. In the future, this process is potentially something that can be used by outside brands and partners. To that end, supply chain tech company Cala recently added Dall-E tech to aid in physical designs of clothing, accessories and shoes. With Dall-E, users can input a text-based prompt (up to 400 characters) to receive four images generated using AI, based on combing images online and combining them.
Designers using Cala's software can use a verbal prompt to generate and refine product imagery, or they can upload a product photo to see modified versions; both enable the user to edit results and ultimately send items into physical production, using Cala's previously existing software and services.

[Photos: Supply chain startup Cala, a launch partner with Dall-E, incorporated its technology into its software. People can input a few prompts to generate design suggestions, preview them as an illustration or photorealistic depiction, and tweak the design before ultimately producing it physically.]

Cala CEO and co-founder Andrew Wyatt says that tools like this can be used as an antidote to fast fashion and overproduction; he is working toward a future in which a person might co-create a piece that is produced only upon purchase. "The amount of variety of what is available to customers is [already] more dramatic, and we think there will ultimately be an infinite assortment," he says.

But what of a house's trademark look and feel? A customer coming to Diane von Furstenberg might be looking for a wrap dress in a bold print, while a Gucci customer comes for maximalist, patterned layers. Wyatt says that brands could potentially add plug-ins on their own e-commerce sites, so customers can personalise pieces by choosing from a collection of pre-made features that are limited to the brand's styles, colours or silhouettes. This type of workflow could also be used internally to jumpstart the creation of additional designs based on an existing theme, Google's Breheny says, which is already something that design teams do manually. "If you have a signature style, you can put these images in as part of your prompt as a way of building out a collection."

In terms of creativity and AI, Cala's Wyatt says, "There are two schools of thought: 'Oh no! AI is bad!', or, 'How can we leverage it to be successful?' We wanted to make this simple, palatable and easy to use for designers."

Computers versus creative directors

As with most new technologies, generative art invites some natural quandaries. It's unclear to what extent someone might create — or sell — a design "inspired" by another brand or designer. Fashion is notoriously self-referential, but adding a brand's logo might eventually be considered a step too far. Already, Instagram accounts such as Str4ngeThing and AI Clothing Daily have referenced Nike logos and silhouettes, although their output has so far remained in the art, rather than fashion, category. The artist behind Str4ngeThing, who prefers to remain anonymous, initially saw success by bringing Nike clothing into Renaissance paintings, specifically because of Nike's influence within culture. "[Now,] my DMs are always full of people asking if they can buy my designs or when they will be available," they say.
Field Skjellerup, who operates AI Clothing Daily (which made the bejewelled Nikes and dripping Balenciaga bags), says the creation process is a "call and response between me and the machine" and sees Stable Diffusion "as a collaborative tool that has the potential to be both extremely rewarding and dangerous, depending on whose hands it passes through". Already, some have worried that avatar creation apps are unfairly stealing work from human artists, or at least their creative styles. This wasn't helped by the fact that many artists in the past few years saw their work erroneously turned into NFTs without their knowledge, Breheny points out.

[Photo: Field Skjellerup, who operates Instagram account AI Clothing Daily, creates works inspired by fashion from the 1980s through the 2010s, using text-to-image tool Stable Diffusion.]

People often misunderstand or misrepresent how these systems work, Google's Breheny says. For example, if Lensa generated a buxom version of oneself, that might be because images on the internet prioritise that silhouette, so the tech is reflecting existing human bias. Recently, he tried to generate a fashion photoshoot using Asian, African, European and Indian models, but found that the Indian model was more challenging. He thinks this is because Indian models might be underrepresented in fashion. "We get whatever bias is inherent, so we need a human element to stop it," he says. In other words, AI still often needs a human to successfully execute. On a similar note, he has found that "idealised" portraits often miss the very real, asymmetrical nature of human faces, so creatives might have to manually add back in flaws. His favoured tool, Midjourney, has introduced "cursed mode", which throws a bit of chaos into the mix.

And of course, there is the longstanding concern that computers might obviate human creativity. However, early adopters say that it aids in, rather than replaces, the creative process. Freelance art director and extended reality and 3D artist Josephine Miller uses a combination of Dall-E, Photoshop and Clo3D in her design process. The AI doesn't create the entire design, but rather has cut the conceptual phase in half, often resulting in hundreds of variations. "Sometimes, it will change or add something to the design that will make me question what I was going for," she says. She has found it works best when prompts are specific and concise. A recent jacket design, which went through "tonnes of iterations", was made using the prompt, "Model wearing oversized futuristic tech wear jacket, monochromatic with geometric lines, cover photo".
[Photo: Josephine Miller created this model using AI to illustrate a jacket design using Dall-E.]

If history is any guide (take photography, as photographer Knight recently pointed out), the concern that technology will dismantle creativity or take the "soul" out of artistic expression is largely unfounded, although it will likely change job functions or skillsets. Miller says that while she doesn't believe AI will steal jobs, it will challenge the traditional workflow and lead to new roles, such as a prompt engineer, who, similar to a conceptual artist, could assist the creative director.

It might also make the human touch even more lucrative. "We believe it will increase the value of handmade content," says Shvets, of Reface. "Nothing will ever replace human art. I believe that new technologies are an inevitable part of our evolution. People believed that machines would take our jobs, and yet we are surrounded by tech. Something that can be optimised by generative AI should be optimised, because you can become 10 times faster." He is also aware of the feedback that "borrowing" the styles of certain living artists can be untoward. To that end, Reface is "working in the direction of collaborating with artists", he says. He compares this moment in time to pirating music, before the advent of Spotify and licensing fees. "It's important to show that this tech can work and be adopted by a lot of people and used for good."

The past, fittingly, serves as an appropriate inspiration. "I would argue that Google has done a lot more help than harm," Wyatt says by way of comparison. "But it did break the original hierarchy of, you had to be at such-and-such college to get access to their library to be able to get the information that is now available to everyone."
Sectors: Consumer Discretionary (Leisure Products; Textiles, Apparel & Luxury Goods)
Tagged: Nick Knight
01/11/2023
I had ChatGPT write me a trading strategy that resulted in a gain of 50,462.98 USD. Not a direct consumer brand use for AI, but a fascinating use case for using ChatGPT to predict.
I had ChatGPT write me a trading strategy that resulted in a gain of 50,462.98 USD 🚀

📈 Trading has been a passion of mine since I first started it 15 years ago, and to this day it is still one of my favourite pastimes, probably why I ended up joining Keyrock in the first place. However, it has still always been a manual process, focused on my particular favourite triangles and star patterns, or technical analysis as you may call it.

🤖 Automation was therefore always the next step, but I stopped trying to code when I could not align two of my favourite Pokémon images in my first HTML website at 10 years old, feeling as though I was stuck in a horribly formatted Word document. However, over the weekend I thought I'd ask ChatGPT to help me, and I experimented a bit with asking it to build a simple Stoch RSI crossover strategy in Pine Script, the native programming language of TradingView.

After some tweaking here and there (quite a few errors), it delivered, with a decent return of 50.46% over 1,800 trades on the BTC/USDT pair on the 6h chart. I will definitely be doing a lot of backtesting of various strategies that I have been doing by hand over the years. I guess the next step is having ChatGPT connect the strategy to my trading account and actually running it. Mike, Peter, or Tone, have you or anyone else tried this yet and come up with some strategies to benefit your own trading?

Disclaimer: This is just a backtest; do not trade this strategy without the expertise or knowledge of capital markets and the risk you place on your capital. I have not done so either.

#trading #chatgpt #strategy #automation
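The generated Pine Script isn't included in the post, so as a reference point, here is a rough Python/pandas sketch of the same kind of Stoch RSI crossover backtest. The load_candles helper is a hypothetical placeholder, and the entry/exit rule (long while %K is above %D) is a simplification of what ChatGPT might produce; treat it as a backtesting illustration, not the author's actual strategy.

```python
# Minimal Stoch RSI crossover backtest sketch (pandas), not trading advice.
import pandas as pd

def rsi(close: pd.Series, period: int = 14) -> pd.Series:
    delta = close.diff()
    gain = delta.clip(lower=0).ewm(alpha=1 / period, adjust=False).mean()
    loss = (-delta.clip(upper=0)).ewm(alpha=1 / period, adjust=False).mean()
    return 100 - 100 / (1 + gain / loss)

def stoch_rsi(close: pd.Series, period: int = 14, k: int = 3, d: int = 3):
    r = rsi(close, period)
    lo, hi = r.rolling(period).min(), r.rolling(period).max()
    stoch = (r - lo) / (hi - lo)
    k_line = stoch.rolling(k).mean()   # %K: smoothed Stoch RSI
    d_line = k_line.rolling(d).mean()  # %D: signal line
    return k_line, d_line

def backtest(close: pd.Series) -> float:
    k_line, d_line = stoch_rsi(close)
    # Long while %K is above %D; flat otherwise. Shift by one bar so a
    # signal computed on bar t is only traded on bar t+1 (no look-ahead).
    position = (k_line > d_line).astype(int).shift(1).fillna(0)
    strategy_returns = close.pct_change().fillna(0) * position
    return float((1 + strategy_returns).prod() - 1)

# Usage, assuming a DataFrame of BTC/USDT 6h candles with a "close" column:
# candles = load_candles("BTC/USDT", "6h")  # hypothetical data loader
# print(f"Total return: {backtest(candles['close']):.2%}")
```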
01/11/2023
Jasper.ai and copy.ai to write copy
Write content faster for content marketing. Come up with many different copy versions quickly, then tweak, mix, and combine them as needed. This increases quality and productivity and generates many more iterations to test in marketing.
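Tools like Jasper.ai and copy.ai sit on top of large language models, so the same "many variants, then tweak" workflow can be sketched directly against a model API. A minimal sketch, assuming an OpenAI API key is set and using an illustrative model, prompt, and parameters:

```python
# Generate several independent copy variants to mix, tweak, and A/B test.
# Model name, prompt, and parameters are illustrative choices only.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def copy_variants(product: str, audience: str, n: int = 5) -> list[str]:
    prompt = (
        f"Write a short, punchy marketing blurb for {product}, "
        f"aimed at {audience}."
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        n=n,              # ask for n independent completions
        max_tokens=80,
        temperature=0.9,  # higher temperature -> more varied copy
    )
    return [choice.text.strip() for choice in response.choices]

for i, blurb in enumerate(copy_variants("a plant-based protein bar", "busy parents"), 1):
    print(f"Variant {i}: {blurb}")
```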
01/11/2023
AI image generators
The top ones as of this posting are Stable Diffusion (local install for local GPU processing), Midjourney, and OpenAI's Dall-E. OpenArt and Lexica let you search generated images and find their prompts and seeds. This looks like an interesting way to narrow down something you want, then use the prompt and seed to create variations.
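For what that prompt-and-seed workflow looks like in practice, here is a minimal sketch using Hugging Face's diffusers library with a locally run Stable Diffusion model; the prompt and seed are illustrative stand-ins for ones you might find on a site like Lexica.

```python
# Reproduce an image from a known prompt + seed, then sweep nearby seeds
# to create controlled variations. Requires a local GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "product shot of a sparkling water can, studio lighting, pastel background"
seed = 1234  # illustrative seed, e.g. one found via a prompt-search site

for offset in range(4):  # four variations around the chosen seed
    generator = torch.Generator("cuda").manual_seed(seed + offset)
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"variation_{seed + offset}.png")
```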
01/11/2023
AI design tools
Come up with designs faster, then work from them.
01/11/2023
Code completion: many providers. Do a search.
01/11/2023
Voice overs and voice editing: many providers. Do a search.
01/11/2023
OpenAI Chat use cases
01/11/2023
OpenAI Chat: a large list of prompts
https://prompts.chat/
01/11/2023
AI-powered influencers
AI-generated online influencers are always on, constantly analyzing community sentiment and adjusting accordingly. They can replace human online influencers because they never go offline and can do a better job of customizing their interactions based on real-time data analysis. The approach can extend to online communities, replacing or augmenting human facilitators with AI-powered ones that tune their interactions with each person to maximize utility and participation.
01/11/2023
Category list of generative AI tools
- Text to Image (T2I)
- Text to Video (T2V)
- Text to Audio (T2A)
- Text to Text (T2T)
- Text to Motion (T2M)
- Image to Text (I2T)
- Audio to Text (A2T)
- Audio to Audio (A2A)
- Brain to Text (B2T)
- Text to Code (T2C)
- Text to 3D (T23D)
- Text to NFT (T2N)
01/11/2023
Generative AI Art SOP: https://docs.google.com/document/d/1YH9-8RATyseuqbhBy9qBdhG_pIc88yt3ePokT8eyRkk/edit?usp=sharing
01/09/2023
AI image generator hack for creating a library of content and tracking it. Added to the tool "AI technologies enhancing web3".
We got to work and started brain-dumping (which is different from brainstorming) all types of creative prompts to see which match the style of the brand. We found that keywords such as Miami Vice, purple and teal, neon, 8k, futuristic city, and more really represented the style of the Furlough brand. Once we had the style, we started coming up with different assets, such as a microphone for Build in Radio and a brain for the Mastermind, to build each piece around. We put it all into a Google Sheet and then used a concatenation function to assemble the final prompt, as sketched below.
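A sketch of that concatenation step outside the spreadsheet, with illustrative keyword and asset lists standing in for the brand's own sheet:

```python
# Combine brand-style keywords with per-asset subjects to assemble prompts,
# equivalent to CONCATENATE/TEXTJOIN across spreadsheet columns.
style_keywords = ["Miami Vice", "purple and teal", "neon", "8k", "futuristic city"]
assets = {
    "Build in Radio": "a microphone",
    "Mastermind": "a brain",
}

def build_prompt(subject: str, styles: list[str]) -> str:
    return f"{subject}, " + ", ".join(styles)

for show, subject in assets.items():
    print(f"{show}: {build_prompt(subject, style_keywords)}")
```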
01/03/2023
Added a link to an extensive list of ChatGPT prompts to use as examples and for modeling. See the tool "AI technologies enhancing web3".
12/16/2022
Added a category list of generative AI tools, showing how they might be categorized, to the tool "AI_technologies_enhancing_web3".
12/13/2022
Added a section titled "Generative AI Use Policy" to the tool "Other_technologies_enhancing_web3", covering some basics about using generative AI so that it does not infringe on intellectual property rights.
12/13/2022
Split AI into its own tool, apart from the tool "Other_technologies_enhancing_web3", and created its own entry on the knowledgebase roadmap.
12/12/2022
Added to the tool "Other_technologies_enhancing_web3" vendors for generative AI in categories including marketing copy, images, graphic design, code completion, voice overs and video editing, and OpenAI Chat uses.
12/08/2022
The Meaning of Life: how AI cannot have subjective experiences, is the average of all prior data, and cannot apply context or meaning to art like humans can. Discord link
The Meaning of Life
brandon@formless.xyz, 2022-12-07

This essay is edited from a talk I gave at Boston University NextTech titled The Meaning of Life. You can watch the presentation here.

Abstract

We've reached a point where machines can emulate humans at a very high level (ChatGPT, LaMDA, DALL-E). Computers can now generate art, music and language. But is there meaning in the output of a machine? Physics tells us that the world is purely material, e.g. the human experience is simply a computation that can be replicated by a machine. What are the implications of a purely physical world? With the acceleration of AI, will we reach a point where behavior alone implies consciousness, or is there something distinct about human experience, and as a result, human art? This is an opinion editorial piece from Brandon Tory Thorpe, artist and former staff engineer at Google AI. It is intended for audiences interested in artificial intelligence and the arts, with no prerequisite engineering experience required for readability.

Free Will

Tonight I buzzed my lips; my five-month-old daughter buzzed back. We held a conversation this way for fifteen minutes. With no clue what the other meant, we only knew what we felt... in sync, in agreement: the fundamental unit of understanding. I know through the years that follow we'll build layers of communication on top of the protocol we defined today. We'll talk politics and travel. We'll talk religion and technology. But what it all compiles down to is two humans buzzing their lips at each other because of the feeling of mutual understanding, the empathy and beauty that it creates within us. Our feelings are not a side effect of our physical reality. Our feelings are our reality. All other communication, including the experience of life itself, is a way to encode these feelings at a higher level of abstraction.

There's a conversation in computer science today about what it means to be intelligent, and transitively, what it means to have consciousness. The majority of physicists believe that all things are physical, e.g. every phenomenon in the universe is a composition of particles governed by known laws. This position, typically called mechanics (both quantum and Newtonian), is what got us planes, self-driving cars, wifi, and to the Moon. Consequently, this also means that with enough computational power, every event in the universe can be predetermined. This is where, if you don't pay close attention to the coded language, you can miss the implication. If the universe is completely deterministic, then there is no free will. By free will I do not mean omnipotence, where one can do anything; rather I mean that within any system, an entity can make a conscious decision which suspends the outcome of the system such that full knowledge of the outcome cannot be precomputed.

With enough computational power we will see a day when a computer will predict everything that you will do and say, before you do or say it. Our science tells us that this is the truth, and yet our intuition makes humans cling to the idea that our thoughts are our own. Our decisions are our own. Our experiences are our own. Why do we cling to our intuition in instances when science tells us otherwise? Our intuition is wrong quite often, and yet our intuition is what made us the dominant species on the planet. Is it worth reconciling our objective analysis of the world with our intuition?

A.I. and Subjective Experience

Natural Language Understanding (NLU) is a form of artificial intelligence that can do things like answer hard questions really fast by learning the meaning of human language. Given a question, NLU systems can retrieve information where the intention seems to mean the answer with a high likelihood. In machine learning these tasks are sometimes called "retrieval", or "sequence to sequence" modeling. Once you have a vector of meaning, you can do interesting things like search a space of all other "meaning" vectors to find the nearest neighbors, and that's where the ability to retrieve answers really fast starts to shine. You can also encode and decode meaning into alternate representations, such as dialog used in conversational AI. With unlimited compute, this architecture yields models that can empower machines to read all the books on earth and answer any question you can think of.

This works really well for information grounded in objective reality. Where it breaks down is in subjective experience. Where art succeeds, physics, math and engineering all fail at encoding and transmitting subjective experiences. In 2020 I lost a friend to the Pacific Ocean. A strong Brooklyn-born fighter in peak athletic shape, he gave up his own life to save his son, caught in a rip current. I say this to emphasize that the meaning of the word ocean may be different for me than it is for you. Aggregating statistical patterns produces an objective representation of the ocean, but cannot contextualize my individual fear of drowning, and hence the feeling that the ocean gives me.

Privacy and Freedom

It's a paradox that brings us back to the question of free will. It's not that subjective experience is too big to fit in memory, or too hard to encode; it's that it is by nature private. It is not possible for you to have the same subjective experience as me, without being me. The fact that my decisions and experiences are private is virtually synonymous with my being free. This is because if my inner thoughts and experiences are public, then there exists some objective description of my behavior which can be used to, for example, put me in jail before I've committed a crime.

The beauty of our human languages, of our art, our music, all lies in the ability to imperfectly communicate a private subjective experience, from one human to another. The artist is the courier of a message. They cannot ensure that everyone will understand, but those that do, understand deeply. We've taken it as a given that objectivity is the base of the meaning of the world, and that our emotions are built on top of those objective truths (mechanics) as a somewhat inconvenient side effect of being human. But what if our emotions are fundamental, and the entire point of objective reality is only to ground language, e.g. an environment from which to pull context when communicating sequences of emotions? A way to give the words meaning, such that we can communicate with each other more deeply. In other words, my feelings about water will never mean the same thing to you unless you've experienced the ocean. There are messages in this world for us to receive that can only be understood through experiences. Language alone is meaningless.

Consciousness

To experience anything requires consciousness. The world was created as a way for us to communicate meaning between conscious entities. Where math is a lossless encoding that communicates objective reality, language is a lossy encoding that attempts to vectorize subjective experience and ground it within the subjective experience of someone else, e.g. by metaphor, or storytelling. If AI is unable to have its own experiences, it can never ground meaning subjectively, meaning it will always represent inputs using a learned space that's essentially based on the aggregate meaning across all of its training data. If AI is able to have its own experiences, then by metaphor it can build representations based on its own learned experiences, but those experiences will be limited to that which a machine can have.

Recommendations from unconscious AI are averages, often lacking the polarity that makes art piercing. Recommendations from conscious AI would in theory be based on the AI's personal preferences, e.g. based on machine experience. The best recommender system would be a machine which experiences the world in the same way that we do, has its own private experience, and grounds meaning subjectively. When framed this way, the best recommender system is a human.

Numbers Do Lie

The question is this: is language meant to describe the world, or is the world meant to describe language? As discussed, meaning can be vectorized as a representation using an encoder. This is a sequence of numbers that can hold the meaning of a phrase, paragraph, entire novel, or song. Theoretically, computing distances between these vectors should give the most objective analysis of the similarity of meaning. This is how recommender systems, like Google Search or Spotify music recommendations, work. We represent art in the form of a number and recommend similar ones.

But the numbers do lie. They lie because art is meant to communicate a subjective experience. Using metrics to quantify meaning only yields the aggregate meaning... not the personal, painful, beautiful, strange meaning that only you understand, because you've been through the same experience as the artist. Worse yet, this creates a feedback loop, where successful content with the same "on average" meaning is constantly elevated in the ranking algorithm, which incentivizes artists to play the average in order to remain economically viable. As recommender systems become more prominent, all art may converge to meaning the same thing: pay me.

Conclusion & Going Forward

In conclusion, being you is a private experience. This is the beauty of life. This is why, particularly in art, word of mouth is so powerful... in our honest, imperfect attempts at sharing our private first-hand experiences with one another, we gain trust in a way that cannot be replicated in an objective recommender system. As we increase compute power we must remember to value that which makes us human. And with that, think deeply on what the most important areas of work are for humans in the future.
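The essay's "search a space of all other 'meaning' vectors to find the nearest neighbors" is the core mechanic of embedding-based retrieval. A minimal sketch, with embed() as a hypothetical stand-in for any sentence-embedding model:

```python
# Rank stored "meaning" vectors against a query vector by cosine similarity.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_neighbors(query_vec: np.ndarray, corpus: dict[str, np.ndarray], k: int = 3):
    scored = [(text, cosine_similarity(query_vec, vec)) for text, vec in corpus.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

# Usage, assuming embed(text) -> np.ndarray from some embedding model:
# corpus = {doc: embed(doc) for doc in documents}
# for text, score in nearest_neighbors(embed("the ocean"), corpus):
#     print(f"{score:.3f}  {text}")
```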
12/07/2022
Generative AI has exploded in the latter half of 2022. Discord links:
12/01/2022
Raoul Pal & E. Mostaque: A Complete SHIFT in Society! Discord link
11/06/2020
NotCo's machine learning tool, which it calls Giuseppe, taps into the company's databases of thousands of plants and plant-based ingredients. By analyzing the molecules in the food, it learns which combinations make, say, cow's milk, and then generates formulas to match.
Create plant-based products to match the taste, texture and performance of animal-based products. Read more at https://www.evernote.com/l/AAVLM-vFFipD_7i3QHQswm2Dn1Rtyn14HyQ/
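The entry describes Giuseppe only at a high level. Purely to illustrate the idea of matching a target product's profile from plant ingredients, here is a toy sketch; every number, feature, and ingredient is invented, and NotCo's actual system is certainly far more sophisticated.

```python
# Toy version of the Giuseppe idea: represent a target product and candidate
# plant ingredients as feature vectors, then search ingredient combinations
# whose blended profile is closest to the target. All values are invented.
from itertools import combinations
import numpy as np

# Hypothetical feature vectors: [fat, protein, sugar, creaminess]
target_cow_milk = np.array([3.5, 3.3, 4.8, 0.9])
ingredients = {
    "pea protein":   np.array([0.8, 8.0, 0.5, 0.3]),
    "coconut cream": np.array([9.0, 1.0, 2.0, 1.0]),
    "pineapple":     np.array([0.1, 0.4, 9.9, 0.1]),
    "cabbage juice": np.array([0.1, 1.2, 3.0, 0.2]),
}

def blend(parts: tuple[str, ...]) -> np.ndarray:
    # Naive equal-parts blend of the chosen ingredients.
    return np.mean([ingredients[name] for name in parts], axis=0)

# Score every 2- and 3-ingredient combination by distance to the target profile.
candidates = [c for r in (2, 3) for c in combinations(ingredients, r)]
best = min(candidates, key=lambda c: np.linalg.norm(blend(c) - target_cow_milk))
print("Closest formula:", best)
```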