From Invisibility Cloaks to AI: How Metamaterials are Revolutionizing Data Centers and Optical Computing

Two startups, one based in Redmond and the other in Austin, are working to apply optical metamaterials (engineered structures that manipulate light in unconventional ways) to two of the most pressing challenges in modern computing: the energy demands of artificial intelligence workloads and the bandwidth bottlenecks in hyperscale data centers. Their approaches, while distinct, share a common foundation in two decades of advances in photonic metamaterials, initially explored for applications such as invisibility cloaks but now redirected toward practical uses in optical interconnects and computing.

Optical metamaterials derive their properties from subwavelength-scale structures that can bend, steer, or modulate light in ways not possible with natural materials. While early demonstrations focused on steering visible light around objects to achieve cloaking effects, researchers have since identified broader applications in telecommunications, sensing, and computing. For data center operators facing exponential growth in AI-driven traffic, the appeal lies in replacing energy-intensive electronic switches with photonic alternatives that avoid repeatedly converting signals from light to electronics and back, a process known as optical-electrical-optical (OEO) conversion that contributes significantly to both latency and power draw.

Lumotive, founded in 2017 and headquartered in Redmond, Washington, has developed a liquid crystal-based optical metasurface chip that can dynamically control the phase, amplitude, and direction of reflected light beams without moving parts. The company states its technology leverages standard semiconductor fabrication processes, integrating copper structures and liquid crystal layers atop a silicon substrate to create a programmable optical interface. According to Lumotive’s public disclosures, the chip was first demonstrated in March 2024 and is designed to function as an optical circuit switch capable of scaling beyond conventional 256×256 port architectures, with theoretical scalability to 10,000×10,000 elements under development. The company plans to begin pilot deployments of its optical switches in late 2026, targeting hyperscale cloud providers seeking to reduce power consumption in AI infrastructure.
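The no-moving-parts beam steering at the heart of such a metasurface can be illustrated with a standard phased-array model: applying a linear phase gradient across an array of subwavelength elements redirects the main lobe of the reflected beam. This is a generic textbook sketch, not Lumotive's actual device model; the element count, pitch, and wavelength below are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters (assumed for this sketch, not Lumotive's specs)
n_elements = 256          # emitters in a 1-D array
pitch = 0.8e-6            # element spacing in meters (subwavelength-scale)
wavelength = 1.55e-6      # telecom C-band wavelength in meters

def far_field(phase_profile, angles):
    """Far-field intensity of the array for a given per-element phase profile."""
    k = 2 * np.pi / wavelength
    positions = np.arange(n_elements) * pitch
    # Coherently sum each element's contribution at every observation angle
    field = np.exp(1j * (phase_profile[None, :] +
                         k * positions[None, :] * np.sin(angles[:, None]))).sum(axis=1)
    return np.abs(field) ** 2

# A linear phase gradient steers the beam to a chosen angle, with no mechanics.
target = np.deg2rad(-10.0)
k = 2 * np.pi / wavelength
gradient = -k * np.arange(n_elements) * pitch * np.sin(target)

angles = np.deg2rad(np.linspace(-45, 45, 2001))
intensity = far_field(gradient, angles)
peak_angle = np.rad2deg(angles[np.argmax(intensity)])
print(f"main lobe steered to {peak_angle:.2f} degrees")  # close to -10
```

Reprogramming the phase profile electronically (in Lumotive's case, via the liquid crystal layer) retargets the beam, which is what lets a single chip act as a reconfigurable optical switch.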

Sam Heidari, Lumotive’s CEO, has emphasized that the absence of mechanical components improves long-term reliability compared to MEMS-based optical switches, which can suffer from stiction and wear over time. He also noted that the development process required extensive collaboration with semiconductor foundries to ensure compatibility with CMOS manufacturing lines and to achieve the yield and uniformity necessary for commercial production. While specific performance metrics such as insertion loss, switching speed, or power consumption per port have not been independently verified in peer-reviewed journals as of April 2025, Lumotive has published technical details in Nature regarding the underlying physics of its tunable metasurface design.

In Austin, Neurophos is pursuing a different application of metamaterials: using them to create ultra-dense optical modulators for AI acceleration. The company, co-founded by Patrick Bowen, claims its approach enables the integration of up to 1,000×1,000 optical modulator elements within a 5 mm × 5 mm footprint, a density that, if matched using conventional silicon photonic modulators, would require a chip of roughly one square meter. Neurophos states its devices are fabricated entirely using CMOS-compatible processes, avoiding exotic materials such as indium phosphide or lithium niobate, which are common in traditional photonic integrated circuits but complicate manufacturing and increase cost.
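The density claim can be sanity-checked with back-of-the-envelope arithmetic. The 1 mm² figure for a conventional modulator is an assumed round number for illustration (Mach-Zehnder modulators in silicon photonics are typically on that order); the other values come from the figures quoted above.

```python
# Sanity check of the density comparison quoted in the text.
elements = 1000 * 1000            # claimed modulator count
metasurface_area_mm2 = 5 * 5      # claimed footprint: 5 mm x 5 mm

# Implied pitch of each metamaterial element on the chip
pitch_um = (metasurface_area_mm2 / elements) ** 0.5 * 1000
print(f"implied element pitch: {pitch_um:.1f} um")  # 5.0 um per element

# Assumed footprint of a conventional silicon photonic modulator: ~1 mm^2 each
conventional_area_m2 = elements * 1e-6  # 1 mm^2 = 1e-6 m^2
print(f"conventional total area: {conventional_area_m2:.0f} m^2")  # 1 m^2
```

The one-square-meter comparison in the text is consistent with a roughly 1 mm² per-modulator assumption, underscoring that the claimed advantage comes from shrinking each element to a few microns.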

According to Bowen, when a laser beam carrying input data strikes the Neurophos chip, the configuration of each metamaterial element alters the phase or amplitude of the reflected light, effectively performing computation through analog optical interference. This allows the chip to execute matrix-vector multiplications—a core operation in neural networks—directly in the optical domain. The company claims its prototype achieves 50 times greater compute density and 50 times better energy efficiency than Nvidia’s Blackwell-generation GPUs, though these figures are based on internal simulations and have not yet been validated through third-party benchmarking or published in independent technical forums as of early 2025.
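The analog matrix-vector multiply described above can be sketched numerically: each modulator element applies a complex (amplitude and phase) weight to the incident field, and coherently summing the weighted fields at each output realizes one row of the product. This is a generic model of optical matrix-vector multiplication, not Neurophos's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Weight matrix encoded as complex transmission coefficients of the
# modulator array: amplitude in [0, 1], phase in [0, 2*pi).
W = rng.uniform(0, 1, (4, 4)) * np.exp(1j * rng.uniform(0, 2 * np.pi, (4, 4)))

# Input vector encoded on the amplitudes of the incoming laser field.
x = rng.uniform(0, 1, 4)

# Each output "pixel" coherently sums the fields weighted by one matrix row;
# optical interference performs the multiply-accumulate in the analog domain.
y_optical = np.array([np.sum(row * x) for row in W])

# The optical result matches the equivalent electronic matrix-vector product.
y_digital = W @ x
print(np.allclose(y_optical, y_digital))  # True
```

Because the accumulation happens via interference rather than sequential digital operations, the energy cost is largely that of generating and detecting light, which is the basis for the efficiency claims made by optical AI accelerator vendors.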

Neurophos plans to deliver two proof-of-concept chips to hyperscaler customers for evaluation during 2025, with initial system deployments targeted for early 2028 and volume production ramping in mid-2028. The company has not disclosed the specific cloud providers involved in its evaluation program, nor has it released detailed specifications regarding operating wavelengths, modulation bandwidth, or thermal management requirements for its optical AI accelerator.

Both startups operate within a broader industry shift toward photonic computing and optical interconnects, driven by the physical limits of Moore’s Law and the rising energy demands of large-scale AI models. Data centers consumed an estimated 460 terawatt-hours of electricity globally in 2022, according to the International Energy Agency, with AI workloads representing a rapidly growing share of that total. Optical alternatives promise to reduce energy use by eliminating repeated OEO conversions and enabling higher bandwidth density within constrained physical spaces.

However, significant technical hurdles remain. Integrating photonic components with existing electronic control circuits, managing thermal crosstalk in dense arrays, and achieving consistent performance across temperature variations are ongoing challenges. The lack of standardized packaging and testing protocols for photonic AI accelerators complicates adoption, particularly for enterprises relying on established supply chains and validation frameworks.

Industry analysts note that while companies like Lumotive and Neurophos represent promising innovation in applied metamaterials, widespread deployment will depend not only on technical performance but also on ecosystem readiness—including software tools, developer support, and compatibility with existing AI frameworks such as TensorFlow and PyTorch. As of April 2025, neither company has announced partnerships with major AI software providers or released public benchmarks demonstrating end-to-end performance on standard machine learning workloads.

The first commercial optical switches from Lumotive are expected to undergo field trials with select hyperscalers in late 2026, with broader availability contingent on successful validation of reliability, power efficiency, and scalability under real-world data center conditions. Neurophos anticipates completing its proof-of-concept evaluations by the end of 2025, after which it will refine its design for manufacturability and begin pilot system integration in 2027.

For readers interested in tracking developments in photonic computing and optical metamaterials, updates are typically shared through company press releases, technical conferences such as the Optical Fiber Communication Conference (OFC) and the International Electron Devices Meeting (IEDM), and peer-reviewed journals including Nature Photonics, IEEE Journal of Selected Topics in Quantum Electronics, and Optica. Regulatory filings with the U.S. Securities and Exchange Commission may also provide insight into funding milestones and strategic partnerships as these startups advance toward commercialization.

What do you think about the potential of optical metamaterials to reshape AI infrastructure? Share your thoughts in the comments below, and consider sharing this article with colleagues interested in the future of energy-efficient computing.
