a16z Predicts Four Major Trends Leading the Way by 2026
AI is driving a new round of structural upgrades in infrastructure, enterprise software, health ecosystems, and the virtual world
Original Title: Big Ideas 2026: Part 1
Original Author: a16z New Media
Translation: Peggy, BlockBeats
Abstract: Over the past year, the breakthroughs in AI have shifted from model capabilities to system capabilities: understanding long sequences, maintaining consistency, performing complex tasks, and collaborating with other intelligent agents. As a result, the focus of industrial advancement has shifted from isolated innovations to redefining infrastructure, workflows, and user interaction.
In the annual "Big Ideas 2026," a16z's four investment teams provided key insights for 2026 from four dimensions: infrastructure, growth, health, and the interactive world.
Essentially, they collectively portray a trend: AI is no longer just a tool but an environment, a system, an acting entity parallel to humans.
The following are the four teams' assessments of the structural changes in 2026:

As investors, our job is to delve into every corner of the tech industry, understand its operational context, and assess the next evolution direction. Therefore, every December, we invite each investment team to share what they believe will be the "big idea" that tech entrepreneurs will tackle in the coming year.
Today, we bring you the viewpoints of the Infrastructure, Growth, Bio + Health, and Speedrun teams. The perspectives of other teams will be released tomorrow, so stay tuned.
Infrastructure Team
Jennifer Li: Startups Will Tame the "Chaos" of Multi-Modal Data
Unstructured, multi-modal data has always been both enterprises' biggest bottleneck and their largest untapped treasure trove. Every company is drowning in PDFs, screenshots, videos, logs, emails, and other semi-structured "data sludge." Models are becoming more intelligent, but their inputs are getting messier: the result is RAG systems that hallucinate, agents that fail in subtle and costly ways, and critical workflows that remain heavily dependent on manual quality checks.
Today, the real limiting factor for AI companies is data entropy: the freshness, structure, and veracity of the unstructured data that holds roughly 80% of a company's knowledge are constantly decaying.
It is for this reason that unraveling the "tangled mess" of unstructured data is becoming a generational entrepreneurial opportunity. Enterprises need a continuous approach to clean, structure, validate, and govern their multi-modal data to truly empower downstream AI workloads. The use cases are widespread: contract analysis, user onboarding, claims processing, compliance, customer service, procurement, engineering retrieval, sales enablement, analytics pipelines, and all intelligent agent workflows that depend on reliable context.
A platform-based startup that can extract structure from documents, images, and videos, resolve conflicts, repair data pipelines, and maintain a fresh and searchable data flow will hold the "key to the kingdom" of enterprise knowledge and processes.
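A minimal sketch of the "clean, structure, validate" step described above, using a toy regex-based extractor over contract-like text. The document format, field names, and `ContractRecord` type are all illustrative assumptions, not any particular vendor's schema; a real pipeline would use document-AI models rather than regexes:

```python
import re
from dataclasses import dataclass
from datetime import date, datetime

@dataclass
class ContractRecord:
    party: str
    effective_date: date
    extracted_at: datetime  # freshness marker for downstream governance

def extract_contract(raw_text: str) -> ContractRecord:
    """Pull a minimal structured, validated record out of free-form contract text."""
    party = re.search(r"Party:\s*(.+)", raw_text)
    eff = re.search(r"Effective Date:\s*(\d{4}-\d{2}-\d{2})", raw_text)
    if not (party and eff):
        # Reject rather than guess: unvalidated context is what makes agents err.
        raise ValueError("document failed structural validation")
    return ContractRecord(
        party=party.group(1).strip(),
        effective_date=date.fromisoformat(eff.group(1)),
        extracted_at=datetime.now(),
    )

doc = "Party: Acme Corp\nEffective Date: 2026-01-15\nTerm: 24 months"
record = extract_contract(doc)
print(record.party, record.effective_date)
```

The point of the sketch is the shape of the workload: extraction plus explicit validation plus a freshness timestamp, so downstream AI consumers can trust what they retrieve.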
Joel de la Garza: AI Will Reshape the Hiring Conundrum of Security Teams
Over the past decade, the biggest headache for CISOs has been recruitment. From 2013 to 2021, the global cybersecurity job gap surged from under 1 million to 3 million openings. The reason: security teams require highly specialized technical talent, yet they assign that talent to exhausting Tier 1 tasks, such as log parsing, that almost no one wants to do.
The deeper root of the problem is that security teams have created their own grunt work. They buy undifferentiated detect-everything tools, so the team has to review everything, which in turn creates an artificial labor scarcity: a vicious cycle.
By 2026, AI will break this cycle by automating most repetitive and redundant tasks, significantly reducing the talent gap. Anyone who has been in a large security team knows that half of the work can be completely automated; the issue is, when you are overwhelmed with work every day, you can't step back to think about what should be automated. Truly AI-native tools will do this for security teams, finally allowing them to focus back on what they originally wanted to do: track attackers, build systems, and fix vulnerabilities.
Malika Aubakirova: Intelligent Agent-Native Infrastructure Will Become the "Standard"
In 2026, the most significant infrastructure shock will not come from the outside but from within. We are moving from "human speed, low concurrency, predictable" traffic to "intelligent agent speed, recursive, bursty, massive" workloads.
The current enterprise backend is designed for a 1:1 model of human action to system response. It is not built to handle a single agent "goal" that sets off 5,000 subtasks, database queries, and internal API calls in a millisecond-scale recursive storm. When an intelligent agent tries to refactor a codebase or remediate security logs, it does not look like a user; to traditional databases and rate limiters, it looks more like a DDoS attack.
To build systems for 2026's intelligent agent workloads, the control plane must be redesigned. "Agent-native" infrastructure will start to rise. The next-generation systems must treat the "thundering herd effect" as the default state. Cold starts must be shortened, latency fluctuations must converge, and concurrency limits must scale by orders of magnitude.
The real bottleneck will shift towards coordination itself: routing, lock control, state management, and policy enforcement in large-scale parallel execution. The platform that can survive in the flood of tool invocations will emerge as the ultimate winner.
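The admission-control pattern the section describes can be sketched with standard-library `asyncio`: one agent goal fans out into thousands of subtasks, and a semaphore gate admits them in bounded waves instead of letting the full burst hit the backend at once. The function names and fan-out figure are illustrative, not a real platform's API:

```python
import asyncio

async def subtask(i: int, gate: asyncio.Semaphore) -> int:
    # Each slot in the gate stands in for a scarce backend resource:
    # a database connection, an internal API quota, a lock shard.
    async with gate:
        await asyncio.sleep(0)  # placeholder for real work
        return i

async def run_agent_goal(fan_out: int, max_concurrency: int) -> int:
    # One agent "goal" fans out into thousands of subtasks; the gate
    # bounds how many are in flight at any moment, so the burst arrives
    # at the backend as controlled waves rather than a thundering herd.
    gate = asyncio.Semaphore(max_concurrency)
    results = await asyncio.gather(*(subtask(i, gate) for i in range(fan_out)))
    return len(results)

completed = asyncio.run(run_agent_goal(fan_out=5000, max_concurrency=64))
print(completed)  # 5000
```

An agent-native control plane generalizes this idea beyond one process: the same admission, routing, and backpressure decisions made across fleets of agents and shared state.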
Justine Moore: Creative Tools Moving Towards Multimodality
We already have the basic building blocks of AI storytelling: generative sound, music, images, and video. However, for anything longer than a short clip, achieving anything close to director-level control is still time-consuming, painful, and often impossible.
Why can't we feed a 30-second video clip to a model, have it create a new character using the reference images and sound we provide, and continue shooting the same scene? Why can't the model "reshoot" from a new angle or have the action match the reference video?
2026 will be the year when AI truly accomplishes multimodal creation. Users will be able to feed any form of reference content to the model to collaboratively generate new works or edit existing scenes.
We have already seen the emergence of first-generation products such as Kling O1 and Runway Aleph, but this is just the beginning—both the model layer and the application layer require new innovations.
Content creation is one of AI's "killer apps," and I anticipate multiple successful products emerging across various user groups—from meme creators to Hollywood directors.
Jason Cui: AI-Native Data Stack Will Continue to Iterate
Over the past year, the "modern data stack" has been visibly consolidating. Data companies are transitioning from modular services such as collection, transformation, and computation to bundled and unified platforms (e.g., Fivetran/dbt merger, Databricks' expansion).
While the ecosystem has matured, we are still in the early stages of a truly AI-native data architecture. We are excited about how AI will continue to transform various parts of the data stack and are beginning to see data and AI infrastructure irreversibly merging.
We are particularly focused on the following directions:
- How data will continue to flow toward high-performance vector databases, beyond traditional structured storage
- How AI agents will address the "context problem": continuously accessing the correct data semantics and business definitions so that applications like "conversing with data" maintain a consistent understanding across multiple systems
- How traditional BI tools and spreadsheets will evolve as data workflows become more intelligent and automated
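The retrieval pattern behind the first two directions above can be sketched in a few lines: rank stored chunks by cosine similarity to a query vector. The three-dimensional "embeddings" here are toy values for illustration; in practice they come from an embedding model and live in a dedicated vector database rather than a Python dict:

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product normalized by vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy corpus mapping text chunks to made-up embedding vectors.
corpus = {
    "Q3 revenue grew 12% year over year": [0.9, 0.1, 0.2],
    "The office plant needs watering": [0.1, 0.8, 0.3],
    "Gross margin definition: revenue minus COGS": [0.7, 0.3, 0.5],
}

def top_k(query_vec, k=2):
    # Return the k chunks most similar to the query vector.
    ranked = sorted(corpus, key=lambda t: cosine(query_vec, corpus[t]), reverse=True)
    return ranked[:k]

print(top_k([0.85, 0.15, 0.3]))
```

The "context problem" is everything this sketch leaves out: keeping the embeddings, business definitions, and permissions behind those chunks consistent across the systems an agent touches.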
Yoko Li: We Will Truly "Step Into the Heart of Video"

By 2026, video will no longer be passively viewed content but a place we can "step into." Video models will finally understand time, remember what they have shown, and react to our actions while maintaining realism and coherence closer to the real world, rather than outputting a few seconds of disconnected frames.
These systems will be able to sustain characters, objects, and physical laws over longer periods, allowing actions to have real consequences and causality to unfold. Video will thus shift from a medium to a space where things can be built: robots can train in it, game mechanics can evolve, designers can prototype, and agents can learn by doing.
The generated world will no longer feel like a short video but like a living environment, beginning to close the gap between perception and action. This will be the first time humans can truly "inhabit" the video they create.
Growth Team
Sarah Wang: The Enterprise "System of Record" Will Begin to Lose Its Central Role
By 2026, the true transformation of enterprise software will come from a core shift: the central position of the system of record will finally start to erode.
AI is shrinking the distance between "intent" and "execution": models can directly read, write, and infer enterprise operational data, transforming ITSM, CRM, and other systems from passive databases to autonomously executing workflow engines.
With the rapid advancement of inference models and intelligent agent workflows, these systems will no longer just respond to demands but will be able to predict, coordinate, and execute end-to-end processes.
The interface will become a dynamic layer of intelligent agents, while the traditional system-of-record layer gradually recedes into cheap persistent storage, its strategic leadership passing to the players who control the intelligent execution environment.
Alex Immerman: Vertical AI Upgrades from "Information Retrieval and Inference" to "Multiplayer Collaboration Mode"
AI is driving explosive growth in vertical industry software. Companies in the medical, legal, and housing sectors have crossed $100 million ARR in record time; finance and accounting are close behind.
The initial revolution was in information retrieval: searching, extracting, summarizing information.
2025 brought about breakthroughs: Hebbia parsed financial statements, Basis reconciled trial balances across multiple systems, EliseAI diagnosed maintenance issues and scheduled vendors.
2026 will unlock the "multiplayer mode."
Vertical software inherently possesses industry-specific interfaces, data, and integration capabilities, and vertical industry work is fundamentally collaborative: buyers, sellers, tenants, consultants, suppliers, each with different permissions, processes, and compliance requirements.
Today, AI operates in silos, leading to chaotic handoffs: the AI analyzing contracts knows nothing about the CFO's modeling preferences; the maintenance AI is unaware of commitments on-site staff have made to tenants.
Multiplayer-mode AI will break this pattern: it will coordinate automatically among stakeholders, maintain context, synchronize changes, route questions to subject-matter experts, and let counterparties' AIs negotiate within set boundaries while flagging asymmetries for human review.
As "multi-agent + multi-human" collaboration raises transaction quality, switching costs will soar; the collaboration network will become the moat AI applications have long been missing.
Stephenie Zhang: We Will No Longer Create for Humans, but for Intelligent Agents
By 2026, people will interact with the internet through intelligent agents, and optimizing content for human attention will lose much of its original significance.
We optimized for predictable human behavior: Google rankings; Amazon top listings; news articles with 5W+1H and enticing openings.
Humans may overlook deep insights buried on the fifth page, but intelligent agents won't.
Software will change accordingly. Applications used to be designed for human eyes and clicks, so optimization meant better UI and flows. As intelligent agents take over retrieval and interpretation, visual design matters less: engineers no longer stare at Grafana, because AI SREs parse telemetry automatically and deliver insights in Slack; sales teams no longer page through the CRM, because agents summarize patterns and insights for them.
We are no longer designing for humans but for intelligent agents. The new optimization is no longer at the visual level but at machine readability. This will completely transform content creation methods and toolsets.
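A hypothetical before/after of "optimizing for machine readability": the same alert rendered as a human-facing string versus serialized for an agent with explicit units, thresholds, and outcomes. The field names and alert schema are invented for illustration, not a real telemetry format:

```python
import json

def render_for_human(alert: dict) -> str:
    # Human rendering relies on visual convention (the warning glyph,
    # implied units) that an agent would have to guess at.
    return f"WARNING {alert['service']}: latency {alert['p99_ms']} ms (p99)"

def render_for_agent(alert: dict) -> str:
    # Agent rendering makes units, thresholds, and the breach verdict
    # explicit, so no visual cues need to be inferred.
    return json.dumps({
        "service": alert["service"],
        "metric": "latency_p99",
        "value_ms": alert["p99_ms"],
        "threshold_ms": alert["threshold_ms"],
        "breached": alert["p99_ms"] > alert["threshold_ms"],
    }, sort_keys=True)

alert = {"service": "checkout", "p99_ms": 840, "threshold_ms": 500}
print(render_for_human(alert))
print(render_for_agent(alert))
```

The design choice is the point: when the primary consumer is an agent, self-describing fields replace layout and color as the optimization target.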
Santiago Rodriguez: The KPI of "Screen Time" Will Disappear
Over the past 15 years, "screen time" has been the gold standard for measuring product value: Netflix viewing hours; mouse clicks in healthcare systems; minutes users spend in ChatGPT.
However, in the upcoming era of "outcome-based pricing," screen time will be completely eliminated.
We are already seeing hints of this: a ChatGPT Deep Research query requires almost no screen time yet delivers tremendous value; Abridge automatically transcribes doctor-patient conversations and handles the follow-up work, so doctors barely look at the screen; Cursor builds entire applications while engineers plan the next phase; Hebbia generates a pitch deck from piles of public documents, letting investment analysts finally get some sleep.
This shift brings its own challenge: companies will need more sophisticated ways to measure ROI. Doctor satisfaction, developer productivity, analyst well-being, user happiness: all of these rise with AI, but none of them shows up as screen time.
Companies that can tell the clearest ROI story will continue to succeed.
Bio + Health Team
Julie Yoo: "Healthy MAUs" Will Become a Core User Group
By 2026, a new healthcare user group will take center stage: "Healthy MAUs" (monthly active healthy individuals).
Traditional healthcare mainly serves three types of people:
- Sick MAUs: patients with high-cost, recurring needs
- Sick DAUs: such as long-term ICU patients
- Healthy YAUs: people who rarely seek medical attention
Healthy YAUs can become Sick MAUs/DAUs at any time, and preventive care could in principle delay that transition. However, proactive monitoring and testing are barely covered by today's treatment-oriented healthcare system.
The emergence of Healthy MAUs changes this structure: they are not sick, but they are willing to monitor their health regularly, making them the largest untapped user group.
We anticipate that AI-native startups, along with repositioned traditional institutions, will step in to provide recurring health services.
As AI reduces the cost of healthcare delivery, prevention-focused insurance products emerge, and users grow willing to pay for subscriptions, "Healthy MAUs" will become the most promising customer segment for the next generation of health tech: continuously active, data-driven, and prevention-oriented.
Speedrun Team (Gaming, Interactive Media, and World Modeling Direction)
Jon Lai: World Modeling Will Reshape Narrative
By 2026, AI world modeling will fundamentally transform storytelling through interactive virtual worlds and the digital economy. Technologies like Marble (World Labs) and Genie 3 (DeepMind) can generate complete 3D worlds from text, allowing users to explore them like playing a game.
As creators adopt these tools, a new form of storytelling will emerge—possibly giving rise to a "generated version of Minecraft," where players collaboratively build a vast, evolving universe.
These worlds will blur the boundaries between players and creators, creating a shared dynamic reality. Different genres such as fantasy, horror, and adventure can coexist; within them, the digital economy will thrive, enabling creators to earn income through asset creation, player guidance, and interactive tool development.
These generated worlds will also serve as training grounds for AI agents, robots, and even potential AGI. World models bring not just a new genre of games but an entirely new creative medium, with an economy growing at its frontier.
Josh Lu: "My Year"
2026 will be "My Year": products will no longer be mass-produced for the "average consumer" but tailored to "you."
In education, Alphaschool's AI mentor will match the pace and interests of each student.
In health, AI will customize supplements, exercise plans, and diet programs for you.
In media, AI will remix content in real-time to suit your taste.
In the past hundred years, giants have thrived by finding the "average user"; the giants of the next hundred years will thrive by finding the "individual within the average user."
In 2026, the world will no longer optimize for everyone but will optimize for "you."
Emily Bennett: The First AI-Native University Will Emerge
In 2026, we will witness the first truly AI-native university—an institution built around intelligent systems from scratch. Traditional universities have applied AI for grading, tutoring, and scheduling, but now a deeper transformation is emerging: an "adaptive academic organism" capable of real-time learning and self-optimization.
You can imagine a university where courses, guidance, research collaborations, and campus operations are all dynamically adjusted in real time based on feedback loops; schedules self-optimize; reading lists dynamically update as new research emerges; and each student's learning path evolves in real time.
Precedents have already emerged: Arizona State University's partnership with OpenAI has resulted in hundreds of AI projects; the State University of New York has incorporated AI literacy into general education.
In an AI-native university:
- Professors become "Learning System Architects": designing data, tweaking models, teaching students how to scrutinize machine reasoning.
- Evaluation methods will shift to "AI mindfulness" assessment: not asking if students used AI, but how they used AI.
With industries in dire need of talent capable of collaborating with intelligent systems, this university will become the "talent engine" of the new economy.