Last Updated on October 14, 2025, by Excedr
Some companies build AI to write text or draw pictures. Lila Sciences is trying to build one that can do something stranger: make discoveries.
The Cambridge-based startup wants to teach artificial intelligence how to think like a scientist—how to ask questions, design experiments, and learn from the results. It’s a moonshot that sounds almost philosophical, except the company is actually doing it.
Lila spent two quiet years inside Flagship Pioneering, the venture studio best known for spinning out Moderna and other deep-tech bets. When it finally came out of stealth in early 2025, the company described its work as the creation of a “scientific superintelligence.” The phrase raised eyebrows, but the idea behind it is simple. If AI can reason about the world and robotic labs can test those ideas, maybe science itself can run faster.
Lila Sciences' version of that idea lives in what it calls AI Science Factories—automated labs packed with robotics, sensors, and specialized models that run experiments nonstop. The system dreams up hypotheses, carries them out, measures the results, and decides what to do next. Humans still guide the big questions, but the day-to-day discovery loop belongs to the machines.
The company isn’t trying to replace scientists. It’s trying to give them an engine—one that never sleeps, never gets bored, and learns with every cycle. Instead of waiting weeks for results, a researcher could explore entire branches of an idea in a single afternoon.
What makes Lila interesting isn’t just the technology. It’s the shift in thinking behind it. Most AI efforts in science treat data as something to be mined. Lila treats it as something to be made. By running its own experiments, it can create original datasets no one else has, feeding them back into its models until the system becomes self-improving.
For all the sophistication of modern science, most research still happens the old-fashioned way: people in lab coats moving liquids, labeling tubes, and waiting for reactions. Even in cutting-edge fields like genomics or drug discovery, many of the core workflows—pipetting, plating, analyzing samples with chromatography or mass spectrometry—look strikingly similar to what they did thirty years ago.
Automation has chipped away at the inefficiencies, but only at the margins. Robotic liquid handlers can speed up sample prep, and plate readers can run assays overnight, but the process still depends on human direction. Someone has to decide what to test, how to test it, and what it means when the results come back.
Lila’s founders see that as the real bottleneck. They argue that discovery itself—not just data processing—needs to be automated. Instead of researchers generating hypotheses and using machines to execute them, Lila’s AI generates its own hypotheses and uses machines to validate them. The system closes the loop between thought and experiment, allowing scientific exploration to happen continuously, not sequentially.
“Most AI in science runs out of things to learn because it’s trained only on public data,” said CEO Geoffrey von Maltzahn. “The next leap forward will come from AI that creates its own data.”
Inside Lila’s AI Science Factories, the standard toolkit of research—liquid handlers, robotic arms, high-throughput screening stations, imaging systems, and analytical instruments—has been rewired into a closed-loop ecosystem. AI models design experiments, direct the instruments, interpret the readouts, and decide what to test next.
A machine might run a combinatorial chemistry experiment one minute, switch to cell imaging the next, then analyze reaction yields using spectroscopy or chromatography data—all without human intervention. The output isn’t just results; it’s new training material. Each run generates proprietary datasets that improve the models controlling the system.
This approach turns the laboratory into a living network of instruments that both produce and consume knowledge. Thousands of experiments can unfold in parallel, each one refining the AI’s understanding of what’s worth exploring next.
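The loop described above—propose an experiment, run it, record the result, let the result shape the next proposal—can be sketched in a few lines of code. The sketch below is purely illustrative: it uses a toy objective (finding a reaction temperature that maximizes yield) and hypothetical function names, not Lila's actual system or API, and it stands in a simple explore-then-refine heuristic for the far more sophisticated models Lila describes.

```python
import random

random.seed(0)  # for reproducibility of this toy run

def run_experiment(temperature: float) -> float:
    """Stand-in for a robotic assay: yield peaks near 70 degrees C."""
    return 1.0 - ((temperature - 70.0) / 100.0) ** 2

def propose_experiment(history: list[tuple[float, float]]) -> float:
    """Pick the next condition: explore randomly at first, then
    refine around the best result seen so far."""
    if len(history) < 5:
        return random.uniform(0.0, 150.0)           # explore broadly
    best_temp, _ = max(history, key=lambda h: h[1])
    return best_temp + random.uniform(-5.0, 5.0)    # refine locally

def discovery_loop(cycles: int = 50) -> tuple[float, float]:
    """Closed loop: each result feeds the next experimental design."""
    history: list[tuple[float, float]] = []
    for _ in range(cycles):
        temp = propose_experiment(history)   # model designs the experiment
        result = run_experiment(temp)        # instruments execute it
        history.append((temp, result))       # data feeds the next cycle
    return max(history, key=lambda h: h[1])

best_temp, best_yield = discovery_loop()
print(f"best temperature: {best_temp:.1f}, yield: {best_yield:.3f}")
```

Even this crude version shows the structural point: no human sits between cycles, and every measurement immediately improves the next decision. Lila's factories replace the toy heuristic with learned models and the stand-in function with real robotics, but the loop is the same shape.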
Rather than keeping the technology in-house, Lila plans to open its platform to enterprise partners. Pharmaceutical, energy, and semiconductor companies could eventually tap into the system through software, outsourcing parts of their R&D pipelines to a machine that not only runs experiments faster, but learns why some work and others fail.
It’s a redefinition of what a lab is. Not a place filled with people using machines, but a machine filled with intelligence—one that’s learning how to do science for itself.
Lila’s founders aren’t first-time dreamers trying to reinvent the lab. They’re the kind of people who’ve built it before—many times over. The company traces its roots to Flagship Pioneering, the Cambridge venture studio known for turning high-concept science into real companies like Moderna, Tessera Therapeutics, and Generate Biomedicines.
Among Lila’s founding team are Noubar Afeyan, Geoffrey von Maltzahn, Molly Gibson, Jacob Feala, Alexandra Sneider, and Ben Kompa, each with deep experience in engineering, computation, or synthetic biology. The group shares a history of working at the intersection of automation, machine learning, and experimental science—domains that have been converging slowly for years.
The team’s scientific anchor is George Church, Harvard geneticist and one of the field’s most prolific tinkerers. Church’s early work on high-throughput sequencing and gene synthesis helped define how modern labs operate—turning once-manual benchwork into programmable systems of robotics, reagents, and sensors. His involvement signals that Lila’s ambitions aren’t speculative; they’re an extension of decades-long trends in lab automation and experimental miniaturization.
At the helm is Geoffrey von Maltzahn, PhD, a biotechnologist with a history of launching platform companies that scale ideas through infrastructure. His past ventures helped standardize technologies like directed evolution and DNA design automation. With Lila, he’s extending that logic to the scientific process itself—treating discovery as an engineering problem that can be optimized, parallelized, and scaled.
The idea is as philosophical as it is practical: what if the act of science could run continuously, like code executing across a distributed network of robotic labs?
That question has attracted serious capital. Lila emerged from stealth in March 2025 with an unprecedented $200 million seed round, led by Flagship Pioneering and joined by General Catalyst, March Capital, ARK Venture Fund, Altitude Life Science Ventures, Blue Horizon Advisors, the State of Michigan Retirement System, Modi Ventures, and a subsidiary of the Abu Dhabi Investment Authority.
Six months later, it raised a $235 million Series A co-led by Collective Global and Braidwell, valuing the company at roughly $1.23 billion. An October 2025 extension added $115 million more, including an investment from Nvidia’s venture arm, bringing total capital raised to $550 million and pushing the valuation past $1.3 billion.
The funds are fueling scale—both digital and physical. Lila recently signed a 235,500-square-foot lease in Cambridge, one of the largest lab leases of the year, to house its growing network of automated labs. Inside, banks of robotic arms, liquid handlers, and analytical systems will serve as testbeds for AI-controlled experimentation. Additional facilities are planned for Boston, San Francisco, and London, connecting global hubs of scientific and computational expertise.
It’s an investment not only in algorithms, but in hardware—the centrifuges, incubators, chromatography systems, and imaging tools that make automated discovery possible. The bet is that by owning the full stack, from AI model to pipette tip, Lila can turn science itself into a reproducible, scalable process.
Lila isn’t building a single product; it’s building a research engine. The company plans to operate as a platform provider, giving partners access to its AI models and robotic laboratories rather than selling discoveries outright. It’s a business model that echoes the evolution of cloud computing—shared infrastructure powering many users, each running their own programs.
The model fits an unmet need. Pharmaceutical companies spend billions on early-stage R&D, running high-throughput screens, culturing cell lines, and characterizing molecules with liquid chromatography, mass spectrometry, and microscopy. Energy and materials companies face similar bottlenecks when testing catalysts, polymers, or battery chemistries. These workflows are measurable and repeatable, but not fast.
Lila’s system could change that by handling both the experimental execution and the iterative reasoning that surrounds it. Its closed-loop automation allows hypotheses to be generated, tested, and reformed continuously—producing a constant flow of experimental data that can feed future models.
Unlike contract research organizations, which trade time and labor for output, Lila offers something closer to a learning factory. The more experiments it runs, the more accurate and adaptable its AI becomes. Over time, this feedback loop could evolve into a shared foundation for discovery across industries—much like GPUs became shared infrastructure for machine learning.
The company claims its platform has already run hundreds of thousands of experiments across biology, chemistry, and materials. While those results haven’t yet been peer-reviewed, the examples are intriguing: optimized antibody fragments, gene therapy vectors, non-platinum catalysts for hydrogen production, and sorbents for carbon capture. Each category represents a different industry, but all rely on the same basic act—turning physical experiments into digital learning.
What Lila is attempting sits at the edge of science fiction and scientific necessity. The idea that discovery could become autonomous feels radical, yet it builds on a long lineage of tools designed to remove friction from research—from the first liquid handlers to high-content screening systems and automated DNA synthesizers.
The company’s vision is to make the scientific process itself continuous. AI would generate the questions, machines would perform the tests, and data would flow back into models that never stop improving. If it works, the pace of learning could accelerate exponentially, limited only by compute power, reagent cost, and imagination.
That’s the promise—and the risk. Success depends on proving that AI-driven labs can consistently produce results that hold up to human scrutiny. It also depends on trust: can researchers, investors, and regulators accept discoveries that originate from machines acting on their own reasoning?
Still, the momentum is clear. Venture capital is flowing into the convergence of AI and automation, and Lila has positioned itself at the center of that shift. Its blend of algorithms, robotics, and infrastructure could mark the beginning of a new era in R&D—one where experiments don’t just answer questions, but learn how to ask better ones.
If it succeeds, Lila Sciences won’t just change how research gets done. It might change what we mean by “doing science” at all.