a16z "Big Ideas for 2026: Part One"

a16z "Big Ideas for 2026: Part One"

Block unicornBlock unicorn2025/12/10 18:04
Show original
By:Block unicorn

This article will share perspectives from the Infrastructure, Growth, Bio + Health, and Speedrun teams.


Written by: a16z New Media

Translated by: Block unicorn


As investors, our responsibility is to deeply understand every corner of the technology industry in order to grasp future trends. Therefore, every December, we invite our investment teams to share one major idea they believe technology companies will tackle in the coming year.


Today, we will share perspectives from the Infrastructure, Growth, Bio + Health, and Speedrun teams. Stay tuned for insights from other teams tomorrow.


Infrastructure


Jennifer Li: How Startups Can Navigate the Chaos of Multimodal Data


Unstructured, multimodal data has always been the biggest bottleneck for enterprises—and their largest untapped treasure. Every company is awash in oceans of PDFs, screenshots, videos, logs, emails, and semi-structured data. Models keep getting smarter, but input data is becoming increasingly chaotic, leading to RAG system failures, agents breaking down in subtle and costly ways, and critical workflows still heavily reliant on manual quality checks. The limiting factor for AI companies today is data entropy: in the world of unstructured data, freshness, structure, and authenticity are in constant decline, and 80% of enterprise knowledge now resides in this unstructured data.


For this reason, untangling unstructured data has become a once-in-a-lifetime opportunity. Enterprises need a continuous approach to clean, build, validate, and manage their multimodal data to ensure downstream AI workloads can truly deliver. Use cases are everywhere: contract analysis, onboarding, claims processing, compliance, customer service, procurement, engineering search, sales enablement, analytics pipelines, and all agent workflows that rely on reliable context. Startups that can build platforms to extract structure from documents, images, and videos, resolve conflicts, repair pipelines, or maintain data freshness and retrievability, hold the keys to the kingdom of enterprise knowledge and processes.
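

None of this is in the original piece, but a hedged sketch can make the "clean, build, validate, and manage" loop concrete. Below, `call_model` stands in for whatever extraction model a platform would use; the contract schema and the 0.8 confidence threshold are invented for illustration.

```python
"""Minimal sketch of an extract -> validate -> route loop for
unstructured documents. `call_model`, the schema, and the threshold
are hypothetical stand-ins, not any vendor's API."""
from dataclasses import dataclass

REQUIRED_FIELDS = {"party_a", "party_b", "effective_date"}  # toy contract schema


@dataclass
class Extraction:
    fields: dict       # extracted field name -> value
    confidence: float  # model's self-reported confidence, 0..1


def call_model(raw_text: str) -> Extraction:
    """Placeholder for any extraction model (LLM, OCR + NER, ...)."""
    raise NotImplementedError("plug in an extractor here")


def process_document(raw_text: str, review_queue: list) -> dict | None:
    """Extract structure, validate it, and route failures to humans
    instead of letting bad records pollute downstream RAG/agent context."""
    result = call_model(raw_text)
    missing = REQUIRED_FIELDS - result.fields.keys()
    if missing or result.confidence < 0.8:
        review_queue.append((raw_text, sorted(missing)))  # manual QA
        return None
    return result.fields
```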


Joel de la Garza: AI Revitalizes Cybersecurity Hiring


For most of the past decade, the biggest challenge facing Chief Information Security Officers (CISOs) has been hiring. From 2013 to 2021, open cybersecurity roles grew from under 1 million to 3 million, because security teams were hiring large numbers of technically skilled engineers to do tedious, repetitive level-one work every day, such as log review, work that nobody wants to do. The root of the problem is that security teams bought products designed to detect everything; detecting everything created the tedious work of reviewing everything, and that workload, in turn, manufactured an artificial labor shortage. It's a vicious cycle.


By 2026, AI will break this cycle and fill the hiring gap by automating much of the repetitive work of cybersecurity teams. Anyone who has worked in a large security team knows that half the work could be easily automated, but when the workload piles up, it’s hard to identify which tasks should be automated. Native AI tools that help security teams solve these problems will ultimately free them to do what they really want: hunt bad actors, build new systems, and fix vulnerabilities.
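

As a toy illustration of the "automate the repetitive half" claim (my sketch, not a tool named in the article): a triage loop that auto-closes alerts a model labels routine and escalates only the remainder to analysts. The labels and `classify_alert` stub are hypothetical.

```python
"""Sketch of automated level-one alert triage. A real system would
call a model or rules engine where the stub raises."""

ROUTINE = {"benign", "known_false_positive"}


def classify_alert(alert: dict) -> str:
    """Placeholder for an AI classifier over raw log/alert data."""
    raise NotImplementedError


def triage(alerts: list[dict]) -> list[dict]:
    """Auto-close routine alerts; return only what needs a human."""
    escalated = []
    for alert in alerts:
        if classify_alert(alert) in ROUTINE:
            alert["status"] = "auto_closed"  # reclaimed analyst time
        else:
            escalated.append(alert)          # humans hunt bad actors
    return escalated
```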


Malika Aubakirova: Native Agent Infrastructure Will Become Standard


By 2026, the biggest infrastructure shock will not come from external enterprises, but from within. We are moving from predictable, low-concurrency “human-speed” traffic to recursive, bursty, and large-scale “agent-speed” workloads.


Today’s enterprise backends are designed for a 1:1 ratio of human operations to system responses. They are not architected for a single agent “goal” to trigger 5,000 sub-tasks, database queries, and internal API calls at millisecond scale in a recursive fan-out. When an agent tries to refactor a codebase or fix security logs, it doesn’t look like a user. To traditional databases or rate limiters, it looks like a DDoS attack.


Building systems for agents in 2026 means redesigning the control plane. We will witness the rise of “agent-native” infrastructure. Next-generation infrastructure must treat the “thundering herd” effect as the default state. Cold start times must be shortened, latency fluctuations drastically reduced, and concurrency limits multiplied. The bottleneck is coordination: routing, locking, state management, and policy enforcement in large-scale parallel execution. Only platforms that can handle the ensuing flood of tool executions will ultimately win.
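

To make "treat the thundering herd as the default" concrete, here is a minimal sketch (all names invented, not from the article) of one agent-native primitive: a concurrency budget scoped to the agent's goal rather than to individual requests, so a 5,000-task fan-out queues as one unit instead of tripping per-user rate limits or looking like a DDoS.

```python
"""Sketch of a per-goal concurrency budget for agent fan-out.
GoalBudget and the numbers are illustrative assumptions."""
import asyncio


class GoalBudget:
    """Caps in-flight sub-tasks for one agent goal."""
    def __init__(self, max_inflight: int = 100):
        self._sem = asyncio.Semaphore(max_inflight)

    async def run(self, coro):
        async with self._sem:  # queue under fan-out, don't reject
            return await coro


async def sub_task(i: int) -> int:
    await asyncio.sleep(0.01)  # stands in for a DB query or API call
    return i


async def main():
    budget = GoalBudget(max_inflight=100)
    # One "goal" triggering 5,000 sub-tasks: the pattern described above.
    results = await asyncio.gather(
        *(budget.run(sub_task(i)) for i in range(5000))
    )
    print(f"{len(results)} sub-tasks completed")


asyncio.run(main())
```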


Justine Moore: Creative Tools Go Multimodal


We now have the building blocks for storytelling with AI: generative voice, music, images, and video. But for anything beyond one-off clips, getting the desired output is often time-consuming and frustrating—or even impossible—especially if you want something close to traditional director-level control.


Why can’t we feed a model a 30-second video and have it continue the scene with new characters created from reference images and sounds? Or reshoot a video so we can view the scene from different angles, or have actions match a reference video?


2026 will be the year AI goes multimodal. You’ll be able to provide models with any form of reference content and use it to create new content or edit existing scenes. We’ve already seen some early products, such as Kling O1 and Runway Aleph. But there’s still much work to be done—we need innovation at both the model and application layers.


Content creation is one of the most impactful applications of AI, and I expect to see many successful products emerge, covering a wide range of use cases and customer segments, from meme creators to Hollywood directors.


Jason Cui: The AI-Native Data Stack Continues to Evolve


Over the past year, as data companies shifted from focusing on specialized areas such as data ingestion, transformation, and computation to bundled unified platforms, we’ve seen the “modern data stack” consolidate. For example: the Fivetran/dbt merger and the continued rise of unified platforms like Databricks.


Although the entire ecosystem has clearly matured, we are still in the early stages of truly AI-native data architecture. We are excited about how AI continues to transform multiple layers of the data stack, and we are beginning to realize that data and AI infrastructure are becoming inseparable.


Here are some directions we are optimistic about:


  • How data will flow into high-performance vector databases alongside traditional structured data (a minimal sketch follows this list)
  • How AI agents will solve the "context problem": continuously accessing the right business-data context and semantic layer to power applications such as conversational data interaction, and ensuring those applications always use the correct business definitions across multiple systems of record
  • How traditional business intelligence tools and spreadsheets will change as data workflows become more agent-driven and automated
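

Here is the sketch referenced in the first bullet (my illustration; `embed` stands in for any embedding model, and the store is deliberately naive): vectors kept side by side with the structured row they describe, so a similarity search returns business records rather than bare text.

```python
"""Toy hybrid store: each embedding is paired with a structured row.
`embed` is a hypothetical stand-in for a real embedding model."""
import numpy as np


def embed(text: str) -> np.ndarray:
    """Placeholder for an embedding model call."""
    raise NotImplementedError


class HybridStore:
    def __init__(self):
        self.vectors: list[np.ndarray] = []
        self.rows: list[dict] = []  # the structured business record

    def add(self, text: str, row: dict) -> None:
        self.vectors.append(embed(text))
        self.rows.append(row)

    def search(self, query: str, k: int = 5) -> list[dict]:
        """Cosine similarity over all vectors; returns structured rows."""
        q = embed(query)
        mat = np.stack(self.vectors)
        sims = mat @ q / (np.linalg.norm(mat, axis=1) * np.linalg.norm(q))
        return [self.rows[i] for i in np.argsort(-sims)[:k]]
```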


Yoko Li: The Year We Step Into Video



By 2026, video will no longer be content we passively watch, but rather a space we can truly inhabit. Video models will finally be able to understand time, remember what they have already shown, react to our actions, and maintain the kind of reliable consistency found in the real world. These systems will no longer generate just a few seconds of fragmented footage, but will be able to sustain characters, objects, and physical effects long enough for actions to have meaning and consequences to unfold. This shift will turn video into an ever-evolving medium: a space where a robot can practice, a game can evolve, a designer can prototype, and agents can learn by doing. The final result will look less like a video clip and more like a living environment—one that begins to bridge the gap between perception and action. For the first time, we will feel as if we can step inside the videos we generate.


Growth


Sarah Wang: Systems of Record Lose Dominance


By 2026, the true disruptive change in enterprise software will be that systems of record will finally lose their dominance. AI is narrowing the gap between intent and execution: models can now directly read, write, and reason over operational data, transforming IT Service Management (ITSM) and Customer Relationship Management (CRM) systems from passive databases into autonomous workflow engines. As advances in reasoning models and agent workflows accumulate, these systems will not only respond, but also predict, coordinate, and execute end-to-end processes. Interfaces will shift to dynamic agent layers, while traditional systems of record will recede into the background as a generic persistence layer—their strategic advantage will be ceded to whoever truly controls the agent execution environment that employees use daily.
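

One hedged way to picture "systems of record recede into a generic persistence layer" (a sketch with invented names, not any vendor's interface): the model issues tool calls, an agent layer executes them, and the CRM is reduced to reads and writes.

```python
"""Illustrative sketch of an agent execution layer in front of a
system of record. CRMClient and the tool registry are hypothetical."""

class CRMClient:
    """Stand-in for any ITSM/CRM API, reduced to generic persistence."""
    def get_record(self, record_id: str) -> dict:
        raise NotImplementedError

    def update_record(self, record_id: str, fields: dict) -> None:
        raise NotImplementedError


# The agent layer, not the system of record, owns the workflow:
# it decides which reads and writes happen, and in what order.
TOOLS = {
    "get_record":    lambda crm, a: crm.get_record(a["id"]),
    "update_record": lambda crm, a: crm.update_record(a["id"], a["fields"]),
}


def execute(crm: CRMClient, tool_call: dict):
    """Run one model-issued tool call against the back-end store."""
    return TOOLS[tool_call["name"]](crm, tool_call["args"])
```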


Alex Immerman: Vertical AI Evolves from Information Retrieval and Reasoning to Multi-Party Collaboration


AI has driven unprecedented growth in vertical industry software. Healthcare, legal, and real estate companies have surpassed $100 million in annual recurring revenue (ARR) within just a few years; finance and accounting are close behind. This evolution began with information retrieval: finding, extracting, and summarizing the right information. 2025 brought reasoning capabilities: Hebbia analyzes financial statements and builds models, Basis reconciles spreadsheets across systems, EliseAI diagnoses maintenance issues and dispatches the right vendors.


2026 will unlock multi-party collaboration modes. Vertical industry software benefits from domain-specific interfaces, data, and integrations. But vertical industry work is inherently collaborative. If agents are to represent the workforce, they need to collaborate. From buyers and sellers to tenants, consultants, and vendors, each party has different permissions, workflows, and compliance requirements—understood only by vertical industry software.


Today, each party uses AI independently, so handoffs between parties lose context and carry no authorization. The AI analyzing procurement agreements doesn't communicate with the CFO to adjust models. Maintenance AI doesn't know what field staff promised tenants. The transformation of multi-party collaboration lies in cross-stakeholder coordination: routing tasks to functional experts, maintaining context, and synchronizing changes. Counterparty AIs negotiate within set parameters and flag asymmetries for human review. Senior partners' annotations train the firm's entire system, and tasks executed by AI are completed at higher success rates.
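

A hedged sketch of the coordination primitive this paragraph implies (the roles, scopes, and escalation rule are all invented for illustration): a handoff succeeds only when the receiving party's agent holds the required permission, and context travels with the task rather than being dropped at the boundary.

```python
"""Toy cross-party task router with per-agent permission scopes."""

PERMISSIONS = {
    "buyer_agent":  {"read_contract", "propose_terms"},
    "seller_agent": {"read_contract", "propose_terms"},
    "cfo_agent":    {"read_contract", "approve_terms"},
}


def route(task: str, required_scope: str, to_agent: str, context: dict) -> dict:
    """Hand off a task only if the receiving agent holds the scope;
    otherwise flag for human review instead of failing silently."""
    if required_scope not in PERMISSIONS.get(to_agent, set()):
        return {"status": "escalate_to_human", "task": task}
    # Context travels with the handoff, so the next agent isn't blind
    # to what was already analyzed or promised upstream.
    return {"status": "handed_off", "agent": to_agent,
            "task": task, "context": context}
```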


As the value of multi-party and multi-agent collaboration increases, switching costs will rise. We will see network effects in AI applications that have long been elusive: the collaboration layer will become the moat.


Stephenie Zhang: Designing for Agents, Not Humans


By 2026, people will begin interacting with the web through agents. What was once optimized for human consumption will matter far less when agents are the consumers.


For years, we have optimized for predictable human behavior: ranking high in Google search results, topping Amazon search, and starting with concise “TL;DR” summaries. In high school, I took a journalism class where the teacher taught us to write news with “5W1H,” and to start feature articles with an engaging lead to hook readers. Perhaps human readers will miss the valuable, insightful arguments hidden on page five, but AI will not.


This shift is also reflected in software. Applications were originally designed to meet human visual and click needs, with optimization meaning good UI and intuitive workflows. As AI takes over retrieval and interpretation, visual design becomes less important for understanding. Engineers no longer stare at Grafana dashboards; AI Site Reliability Engineers (SREs) can interpret telemetry data and post analyses on Slack. Sales teams no longer need to laboriously comb through CRM systems; AI can automatically extract patterns and summaries.


We are no longer designing content for humans, but for AI. The new optimization goal is not visual hierarchy, but machine readability—this will change how we create and the tools we use.
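

As a toy illustration of this shift (my sketch; the schema and field names are invented): the same story published as structured data, so the argument "hidden on page five" for a human reader is a single key lookup for an agent.

```python
"""Publish content as a machine-readable summary alongside the
human page. The schema here is invented for illustration."""
import json

article = {
    "headline": "Quarterly results",
    # The 5W1H an agent wants, without the engaging human lead:
    "who": "Example Corp",
    "what": "reported Q3 earnings",
    "when": "2026-01-15",
    "claims": [
        {"metric": "revenue", "value": 120_000_000, "unit": "USD"},
        {"metric": "yoy_growth", "value": 0.18},
    ],
}

# A human page might bury the growth figure deep in the narrative;
# an agent reads it in one key lookup.
print(json.dumps(article, indent=2))
```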


Santiago Rodriguez: The End of “Screen Time” KPI in AI Applications


For the past 15 years, screen time has been the best metric for measuring value delivery in consumer and enterprise applications. We have lived in a paradigm where Netflix streaming hours, mouse clicks in medical EHR user experiences (to demonstrate "meaningful use"), and even time spent on ChatGPT are key performance indicators. As we move toward outcome-based pricing models, which perfectly align vendor and user incentives, we will first abandon screen-time reporting.


We are already seeing this in practice. When I run DeepResearch queries on ChatGPT, I get tremendous value even if screen time is nearly zero. When Abridge magically captures doctor-patient conversations and automatically executes follow-ups, doctors barely look at the screen. When Cursor develops complete end-to-end applications, engineers are planning the next feature cycle. And when Hebbia drafts presentations from hundreds of public documents, investment bankers can finally get a good night’s sleep.


This brings a unique challenge: per-user pricing for applications will require more complex ROI measurement. The proliferation of AI applications will improve doctor satisfaction, developer efficiency, financial analyst well-being, and consumer happiness. Companies that can articulate ROI in the most concise way will continue to outperform competitors.


Bio + Health


Julie Yoo: The Healthy Monthly Active User (MAU)


By 2026, a new healthcare customer segment will come into focus: the “healthy monthly active user.”


Traditional healthcare systems primarily serve three major user groups: (a) "sick monthly active users," those with fluctuating needs and high costs; (b) "sick daily active users," such as patients requiring long-term intensive care; and (c) "healthy yearly active users," those who are relatively healthy and seek care perhaps once a year. Healthy yearly active users face the risk of becoming sick monthly or daily active users, and preventive care can slow that transition. But our reimbursement system rewards treatment, not prevention, so proactive health checks and monitoring services are deprioritized, and insurance rarely covers them.


Now, the healthy monthly active user segment is emerging: they are not sick, but want to regularly monitor and understand their health—and they may be the largest group among consumers. We expect a batch of companies—including AI-native startups and upgraded versions of existing enterprises—to begin offering regular services to serve this user group.


With AI’s potential to lower healthcare costs, the emergence of new preventive-focused health insurance products, and consumers’ growing willingness to pay out-of-pocket for subscription models, “healthy monthly active users” represent the next high-potential customer segment in health tech: continuously engaged, data-driven, and prevention-focused.


Speedrun (an internal a16z investment team name)


Jon Lai: World Models Shine in Narrative Domains


In 2026, AI-driven world models will revolutionize storytelling through interactive virtual worlds and digital economies. Technologies such as Marble (World Labs) and Genie 3 (DeepMind) can already generate complete 3D environments from text prompts, allowing users to explore them as if in a game. As creators adopt these tools, entirely new narrative forms will emerge, potentially evolving into a “generative Minecraft” where players co-create vast and ever-evolving universes. These worlds can combine game mechanics with natural language programming; for example, players might command, “Create a brush that turns anything I touch pink.”


Such models will blur the line between player and creator, making users co-creators of dynamic shared realities. This evolution could spawn interconnected generative multiverses, allowing genres like fantasy, horror, and adventure to coexist. In these virtual worlds, digital economies will thrive, with creators earning income by building assets, mentoring newcomers, or developing new interactive tools. Beyond entertainment, these generative worlds will also serve as rich simulation environments for training AI agents, robots, and even AGI. Thus, the rise of world models signals not just a new game genre, but a new creative medium and economic frontier.


Josh Lu: “The Year of Me”


2026 will be “The Year of Me”: products will no longer be mass-produced, but tailored for you.


We are already seeing this trend everywhere.


In education, startups like Alphaschool are building AI tutors that adapt to each student’s learning pace and interests, giving every child an education that matches their rhythm and preferences. Such attention would be impossible without spending tens of thousands of dollars per student on tutoring.


In health, AI is designing daily supplement regimens, workout plans, and meal programs tailored to your physiology—no coach or lab required.


Even in media, AI enables creators to remix news, shows, and stories into personalized feeds that perfectly match your interests and tastes.


The biggest companies of the last century succeeded by finding the average consumer.


The biggest companies of the next century will win by finding the individual within the average consumer.


In 2026, the world will no longer be optimized for everyone, but will begin to be optimized for you.


Emily Bennett: The First AI-Native University


I expect that in 2026 we will witness the birth of the first AI-native university—an institution built from the ground up around AI systems.


In recent years, universities have experimented with applying AI to grading, tutoring, and course scheduling. But what is now emerging is a deeper AI: an adaptive academic system that learns and self-optimizes in real time.


Imagine an institution where courses, advising, research collaborations, and even building operations are continuously adjusted based on data feedback loops. Schedules self-optimize. Reading lists are updated nightly and automatically rewritten as new research emerges. Learning paths are adjusted in real time to fit each student’s pace and circumstances.


We have already seen some early signs. Arizona State University (ASU)’s campus-wide partnership with OpenAI has spawned hundreds of AI-driven projects covering teaching and administration. The State University of New York (SUNY) has now made AI literacy part of its general education requirements. These are foundations for deeper deployment.


In an AI-native university, professors will become architects of learning, responsible for data management, model tuning, and guiding students on how to question machine reasoning.


Assessment will also change. Detection tools and plagiarism bans will be replaced by AI literacy assessments; students will be graded not on whether they used AI, but on how they used it. Transparency and strategic use will replace prohibition.


As every industry strives to hire talent who can design, manage, and collaborate with AI systems, this new university will become a training ground, producing graduates fluent in AI system coordination and supporting a rapidly changing workforce.


This AI-native university will be the talent engine of the new economy.


That’s all for today. See you in the next installment; stay tuned.
