u/AR_MR_XR

Active noise cancellation on open-ear smart glasses

Smart glasses are becoming an increasingly prevalent wearable platform, with audio as a key interaction modality. However, hearing in noisy environments remains challenging because smart glasses are equipped with open-ear speakers that do not seal the ear canal. Furthermore, the open-ear design is incompatible with conventional active noise cancellation (ANC) techniques, which rely on an error microphone inside or at the entrance of the ear canal to measure the residual sound heard after cancellation. Here we present the first real-time ANC system for open-ear smart glasses that suppresses environmental noise using only microphones and miniaturized open-ear speakers embedded in the glasses frame. Our low-latency computational pipeline estimates the noise at the ear from an array of eight microphones distributed around the glasses frame and generates an anti-noise signal in real time to cancel environmental noise. We develop a custom glasses prototype and evaluate it in a user study across 8 environments under mobility in the 100--1000 Hz frequency range, where environmental noise is concentrated. We achieve a mean noise reduction of 9.6 dB without any calibration, and 11.2 dB with a brief user-specific calibration.

Paper: https://arxiv.org/abs/2604.05519
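The pipeline in the abstract boils down to classic adaptive noise cancellation: estimate the noise arriving at the ear from reference microphones, then emit its inverse. A toy single-reference LMS canceller in Python illustrates the principle (a minimal sketch only; the sample rate, tap count, step size, and acoustic delay are arbitrary assumptions, not the paper's eight-microphone method):

```python
import numpy as np

# Toy LMS adaptive noise canceller (illustrative only, not the paper's pipeline).
# A frame microphone hears the noise slightly before it reaches the ear; an
# adaptive FIR filter learns to predict the noise at the ear from that
# reference and subtracts the prediction.

rng = np.random.default_rng(0)
fs = 8000                                   # sample rate in Hz (assumed)
t = np.arange(fs) / fs
noise = np.sin(2 * np.pi * 200 * t)         # 200 Hz tonal noise at the ear
# Reference mic: leads the ear signal by 5 samples, plus sensor noise (assumed)
ref = np.roll(noise, -5) + 0.05 * rng.standard_normal(fs)

taps, mu = 16, 0.01                         # filter length and LMS step size
w = np.zeros(taps)
residual = np.zeros(fs)
for n in range(taps, fs):
    x = ref[n - taps:n][::-1]               # most recent reference samples
    y = w @ x                               # predicted noise at the ear
    e = noise[n] - y                        # residual after cancellation
    w += mu * e * x                         # LMS weight update
    residual[n] = e

# Compare noise power before vs. after cancellation (second half, converged)
before = np.mean(noise[fs // 2:] ** 2)
after = np.mean(residual[fs // 2:] ** 2)
reduction_db = 10 * np.log10(before / after)
print(f"noise reduction: {reduction_db:.1f} dB")
```

A real open-ear system additionally has to model the speaker-to-ear acoustic path, fuse all eight microphones, and keep total latency low enough for broadband cancellation, which is where the paper's contribution lies.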

u/AR_MR_XR — 7 hours ago

Meta Introduces Muse Spark: A Multimodal Reasoning Model Coming to Smartglasses

Meta Superintelligence Labs has launched Muse Spark, the first model in its new Muse series, officially replacing the Llama architecture for its on-device ecosystem. Because it is engineered to be small and fast, Meta avoids the traditional large language model label, instead defining it as "a natively multimodal reasoning model with support for tool-use, visual chain of thought, and multi-agent orchestration."

Unlike text-first models, Muse Spark is built from the ground up for immediate visual perception and complex problem-solving, capable of launching multiple subagents in parallel to handle distinct tasks. Rolling out to Meta's AI glasses in the coming weeks, this architecture allows the onboard assistant to actively see and interpret the wearer’s environment. Practical applications include real-time object identification, visual coding, and surfacing location-based context directly from platforms like Threads and Instagram, eliminating the need for the user to verbally explain what they are looking at.

Introducing Muse Spark: Scaling Towards Personal Superintelligence

Introducing Muse Spark: MSL’s First Model, Purpose-Built to Prioritize People

u/AR_MR_XR — 15 hours ago

The End for ImagineAR? Days after losing its lawsuit against Niantic, ImagineAR shutters its AR platform

ImagineAR has abruptly paused its platform operations following a federal court's dismissal of its lawsuit against Niantic, which ruled that broad concepts like GPS-triggered AR content cannot be patented without a unique technical implementation. Here's the ImagineAR press release:

ImagineAR today announced a strategic operational update in response to current market conditions.

The Company has elected to temporarily suspend active operations of its augmented reality platform (the "AR Platform") as part of a broader initiative to optimize capital allocation and enhance long-term shareholder value. The AR Platform will be maintained in a ready state, preserving the ability to re-engage operations as market conditions evolve.

During this period, ImagineAR will focus its resources on strengthening and expanding its intellectual property portfolio, while actively pursuing strategic partnerships, licensing opportunities, and transactions aligned with high-growth technology sectors.

Management believes this disciplined approach positions the Company to maximize the value of its technology assets while maintaining flexibility to capitalize on future market opportunities.

About ImagineAR

Imagine AR Inc. (CSE: IP) (OTCQB: IPNFF) has developed an "AR-as-a-Service" platform that enables sports teams and organizations of any size to create and implement their own AR campaigns with no programming or technology experience. Every organization, from professional sports franchises to small retailers, can develop interactive AR campaigns that blend the real and digital worlds using ImagineAR. Customers simply point their mobile device at logos, signs, buildings, products, landmarks and more to instantly engage with videos, information, advertisements, coupons, 3D holograms and any interactive content, all hosted in the cloud and managed using a menu-driven portal. Integrated analytics mean that all customer interaction is tracked and measured in real time. The ImagineAR mobile app is available in the iOS and Android mobile app stores. The platform is available as a native mode software development kit ("SDK").

For more information or to explore working with ImagineAR, please email info@imaginear.com, or visit www.imagineAR.com.

u/AR_MR_XR — 15 hours ago

Snap's new AR Glasses will be powered by Snapdragon

Today, Specs Inc., a Snap subsidiary, and Qualcomm Technologies, Inc. announced a multi-year strategic agreement to power future generations of Specs with Qualcomm Technologies’ industry-leading Snapdragon system-on-a-chip (SoC).

This is the first flagship engagement for Specs Inc., which is launching Specs, advanced eyewear that seamlessly integrates digital experiences into the physical world, for consumers later this year. Specs are standalone, see-through glasses that bring the digital world to you, allowing you to see, hear, and interact with digital content just like it’s in your physical space.

Specs are powered by Snapdragon XR platforms. By combining edge AI and high-performance, low-power compute, Snapdragon platforms provide the foundation that enables intelligent, context‑aware experiences to run directly on-device, for faster and more private interactions. This strategic initiative builds on both companies’ commitment to making computing more human and more seamlessly integrated into everyday life, transforming the way the world works, learns, and plays together.

Snap Inc. and Qualcomm Technologies have a strong track record of powering advanced immersive technology. This agreement builds on more than five years of innovation and collaboration, as Snapdragon platforms have powered multiple previous generations of Snap’s Spectacles.

Through long-term strategic roadmap alignment and technical collaboration, both companies will work together to rapidly bring industry-leading capabilities to the Specs platform, including on-device AI, cutting-edge graphics, and advanced multiuser digital experiences.

The joint initiative establishes a scalable foundation for the growing community of developers and partners building for Specs, supporting a predictable product cadence and enabling the creation of increasingly sophisticated digital experiences over time.

“We believe the future of computing will be more human and grounded in the real world," said Evan Spiegel, co-founder and CEO, Snap Inc. “Our work with Qualcomm provides a strong foundation for the future of Specs, bringing developers and consumers advanced technology and performance that pushes the boundaries of what’s possible.”

“The next era of computing will be defined by devices that understand what you see, hear and say as well as context, and respond instantly to the world around you,” said Cristiano Amon, President and Chief Executive Officer, Qualcomm Incorporated. “Our work on future generations of Specs will enable power-efficient interactive AR devices that deliver agentic experiences that feel natural, intuitive and integrate seamlessly into daily life.”

u/AR_MR_XR — 1 day ago

XREAL's Most Affordable Glasses EVER Are Coming

XREAL is preparing to launch a new pair of AR glasses, and the main goal is a lower price. These are not going to compete with the current XREAL 1S or One Pro. Instead, they will be part of the Air series. The strategy is straightforward: lower the barrier to entry, reach the mass market, and take more market share. Scaling up production in turn reduces the cost per unit.

This mass market push also means they can expand to new countries.

To reach a true budget price, they obviously have to make some hardware cuts. Here is what they could change:

  • X1 Chip: The One series uses this for built-in 3DoF tracking, but omitting it from the new Air model is a major way to cut costs.
  • Microdisplays: Instead of the expensive Sony microdisplays, they could switch to less expensive panels from BOE, Seeya, or Sidtek.

What features do you think they will sacrifice? And what country do you hope they launch in next?

u/AR_MR_XR — 1 day ago

Scaling Up: North Ocean Raises $60M to Supply Waveguides for 200K AR Glasses in 2026

Shanghai-based optics manufacturer North Ocean Photonics has just closed a massive C+ funding round of nearly 400 million RMB, signaling major capital confidence in the scaling of Wafer-Level Optics (WLO).

According to an exclusive report broken by Huaxin Capital, this new round was led by CITIC Zhengye, Yunfeng Capital, and Haiwang Capital, with continued participation from existing backers. To date, the company has raised nearly 1 billion RMB, with Huawei’s Hubble Investments notably stepping in early back in 2019.

North Ocean Photonics is one of the few global players operating on an IDM (Integrated Device Manufacturer) model capable of producing Wafer-Level Optics at a scale of tens of millions of units. While they also supply 3D sensing and automotive LiDAR components, AR diffractive optical waveguides are a massive focus.

The company currently claims the top spot for domestic shipments of AR optical waveguides in China. With this new influx of cash, they are upgrading their Lingang production base to an annual capacity of 10 million units across all product lines.

Specifically for the AR market, North Ocean Photonics is setting an aggressive target: shipping waveguides for over 200,000 glasses in 2026. As the industry continues to battle the "make it good, make it cheap, make it scalable" trilemma of AR optics, this level of capacity expansion from a major supplier is a strong indicator that the hardware supply chain is bracing for a significant bump in consumer smart glasses volume over the next couple of years.

^(Source: Huaxin Capital Semiconductor Group)

u/AR_MR_XR — 2 days ago

iFLYTEK Showcases Display AI Glasses

iFLYTEK showcased its AI Glasses and AI Interpret Mic at GITEX ASIA 2026. Alongside the new devices, the company presented its broader AI translation portfolio, demonstrating how advanced AI helps break down language barriers and enable intelligent communication across industries and everyday life. Powered by large-model AI, the portfolio underscores iFLYTEK’s focus on delivering accurate, secure, and scalable multilingual interaction in real-world scenarios.

AI Glasses for Face-to-Face Communication

Designed for international business environments, the iFLYTEK AI Glasses integrate real-time AI vision and speech translation to support seamless multilingual interaction. The glasses feature a first-of-its-kind multimodal noise reduction system with lip-reading recognition, allowing the device to accurately identify the active speaker and filter background noise in complex, multi-person conversations. Weighing just 40 grams, about 20% lighter than comparable products, they offer a lightweight and comfortable design for all-day wear.

AI Interpret Mic for Professional Conferences

The AI Interpret Mic is a simultaneous interpretation microphone combining high-precision speech recognition with real-time translation. It is designed for multilingual conferences and integrates directly with conference systems to support synchronized cross-language communication in professional event settings.

Building a Comprehensive AI Translation Ecosystem

Beyond the newly launched devices, iFLYTEK’s AI translation capabilities extend across a wide range of real-world scenarios. In daily office settings, AINOTE integrates AI-powered recording and transcription to improve note-taking efficiency. For users on the move, iFLYTEK AI Watch offers a lightweight, always-available way to capture conversations, with built-in transcription and AI-generated summaries that turn moments into actionable insights.

For cross-language meetings and calls, AI Translation Earbuds enable natural, real-time communication. In business travel scenarios, the Smart Translator supports instant multilingual interaction. At large-scale conferences and international forums, AI Interpreta delivers enterprise-level simultaneous interpretation, while the AI Translation Screen supports public services and tourist destinations with a dual-sided transparent display showing bilingual content simultaneously. The lineup also includes the Bavvo app for everyday translation needs, as well as the AI Recorder, which further enhances productivity by converting spoken content into usable text with real-time transcription and translation.

Together, these applications reflect iFLYTEK’s strategy of building a full-scenario AI translation framework, supporting communication from individual productivity to global events.

These capabilities are built on iFLYTEK’s 26 years of expertise in speech and language technologies. Its machine translation system has completed national-level evaluation and performed strongly in international spoken-language benchmarks, reflecting the company’s continued focus on advancing secure and scalable multilingual AI.

“Clear communication is the cornerstone of global collaboration,” said Vincent Zhan, Vice President of iFLYTEK. “With our AI translation technologies, we’re helping people and businesses connect with greater clarity and confidence worldwide.”

iFLYTEK’s AI translation portfolio is showcased April 9–10 at Booth HB-A80 at GITEX ASIA 2026. Visitors can also explore the company’s AI infrastructure and AI solutions, and see how these technologies support enterprise innovation and everyday productivity.

Learn more at: https://www.iflytek.com/en/index.html

^(Source: iFLYTEK)

u/AR_MR_XR — 2 days ago

Huawei teases camera AI Glasses launch

Huawei Consumer Business Group CEO He Gang shared a photo on social media with a "HUAWEI AI Glasses" watermark clearly visible in the bottom left corner, suggesting the photo was taken with the glasses themselves. Huawei has talked about AI glasses before, but this is the first solid photographic proof, and it confirms they pack a camera.

Huawei is expected to host a product launch event on April 21.

What else the leaks say about the upcoming Huawei AI Glasses: they will feature a customized design and come with new, highly durable hinge technology supplied by a subsidiary of Yutong Technology. Huawei is reportedly preparing an estimated shipment of 400,000 to 500,000 units.

In short, Huawei is focusing on structural improvements for its next-generation smart glasses while preparing a substantial launch volume.

^(Sources:) ^(huaweicentral.com)^(, Micro-Display)

u/AR_MR_XR — 2 days ago

Niantic Spatial launches Scaniverse and VPS 2.0

World models are advancing rapidly – but most are trained on text and images. Operating in the physical world requires something different: models with precise coordinates and geometry to make environments navigable and machine-readable. That matters for the 80% of the economy that happens outside of digital screens.

Niantic Spatial is building that foundation: a living model of the world that people and machines can talk to. Today we're launching Scaniverse for businesses as the front door to our spatial intelligence services and Large Geospatial Model.

Capturing a space and knowing exactly where you are within it are two different problems. Most companies solve one. Niantic Spatial builds models that are both geometrically accurate and spatially grounded, allowing machines to understand and interact with the physical world.

Here’s what we’re launching:

  • Scaniverse: An integrated web and mobile platform that captures 3D spaces – small and large – supporting multiple devices, to generate visual positioning maps, meshes, and Gaussian splats.
  • VPS 2.0: Precise visual positioning that now works at global scale – no prior scanning required. In places mapped with Scaniverse, VPS delivers near centimeter-accurate 6DoF localization – full position and orientation. Everywhere else, it corrects GPS errors and dropout to provide improved, reliable positioning and heading, especially in GPS degraded environments.

Continue reading on nianticspatial.com/blog/scaniverse

u/AR_MR_XR — 2 days ago

Are You Ready to Test Some Smartglasses?!

MemoMind is starting a Beta Test Program. Here's what they wrote:

We're offering a limited number of MemoMind One AI glasses to Reddit mods, tech reviewers, and regular contributors before they launch on Kickstarter on May 21st. Register to become one of our beta testers and provide your honest feedback. Skeptics welcome. If you've used smart glasses and have opinions, even better. Sound good? Read on.

We're MemoMind, an AI glasses company incubated by XGIMI, the display technology company behind some of the world's most acclaimed projectors. After a decade of building precision optical systems, XGIMI channeled that same engineering expertise into a single question: What if we put a world-class display on your face?

We didn't stumble into optics. We grew up in it.

We just won 9 awards at CES 2026, including Best Wearable from Android Central and Variety and Best in Show from PC Mag. At MWC 2026, we added even more awards and had people walking up to our booth ready to buy.

What sets us apart is a deliberate combination: a no-camera design for real privacy, multi-LLM processing, onboard Harman Kardon speakers, and a 16+ hour battery life.

We are looking for participants who:

- Have a strong interest in AI hardware and possess extensive experience with such devices.

- Are active on social media and engaged in relevant tech communities.

- Are willing to use the device regularly in various scenarios (e.g., commuting, working, learning) and provide detailed, structured feedback on their experience.

- Can communicate their thoughts clearly and constructively with our product and engineering teams.

What you get:

- Early access to MemoMind One before the Kickstarter goes live

- A direct line to our product team — your feedback shapes what ships

- A first look at features we haven't announced publicly yet

- Recognition as a Founding Tester and a founding member of our community

- An exclusive gift pack specifically for testers

One small ask before you apply:

If you do test MemoMind One, your feedback and content might be genuinely useful to others in making their decision. We want to be upfront about how we might use it, and we want you in control of that.

When you apply, we'll include a simple permissions form. You'll see your Reddit handle and four yes/no choices: Kickstarter campaign, website, organic social media, and paid advertising. Each one is independent. Say yes to all of them, none of them, or anything in between. We will never use your name, handle, or content beyond what you approve, and you can change your mind at any time by emailing us directly.

Apply here and good luck!

The MemoMind Team

u/AR_MR_XR — 3 days ago

SONY PlayStation starts pilot-project to 3D scan users and bring them into the games

An interesting first step that will hopefully lead to many experiences where users can step in as their own avatar. Including real-world AR 🙏 For now, this pilot project is about including only one user in the official GT7, if I get that right? Nevertheless, it is something. With the recent acquisition of that generative volumetric media startup by PlayStation, this could be a signal that they still push towards the >!metaverse 🙊!< I mean real-world avatars. ^(1)

Bringing PlayStation’s biggest fans into blockbuster PlayStation Studios games

1: SONY SciFi Prototyping: ONE DAY, 2050 | Jobbing & Working on YouTube

u/AR_MR_XR — 3 days ago

Judge decides Niantic did not build its empire on ImagineAR patents

On April 7, 2026, the U.S. District Court for the District of Delaware dismissed ImagineAR’s patent infringement lawsuit against Niantic. Judge Joshua D. Wolson granted Niantic’s motion for judgment on the pleadings, ruling that ImagineAR’s patents were legally invalid because they were directed at abstract ideas rather than technical inventions. The court found that the concept of tailoring virtual content to a user’s location lacked the "inventive concept" required for patent eligibility under 35 U.S.C. §101. This decision follows a previous ruling in the case that had already dismissed claims of willful infringement.

This ruling clarifies that broad concepts like GPS-triggered AR content are not patentable without a specific, unique technical implementation. For the AR industry, it establishes a higher bar for intellectual property claims and prevents individual companies from claiming ownership over the basic mechanics of location-based spatial computing.

Source: news.bloomberglaw.com

u/AR_MR_XR — 3 days ago

INMO GO3 via Kickstarter ... or ... MOVA smartglasses

MOVA x INMO seem to be quietly teaming up to bring the GO3 smart glasses and smart ring to broader markets, including the US. It is an interesting move: MOVA expands from home robotics to a full "smart living" ecosystem, while INMO gets the leverage to push far beyond Kickstarter. This isn't an officially announced partnership, but see for yourself 😉 The image on top is INMO, the image below is MOVA. The difference seems to be in the accessories. INMO GO3 has a few more.

>About MOVA: MOVA (mova.tech) is a global, Dreame-owned smart home appliance brand founded in 2024, specializing in AI-powered cleaning products, including robot vacuums, wet/dry vacuums, and robot lawn mowers. The company, which has a strong focus on European and Asian markets with expansion into North America, aims to provide high-performance, user-centric technology.

MOVA Launches Smart Ring H1 and Smart Glasses S1

INMO Unveils GO3, Next-Generation Everyday AI Smart Glasses Launching on Kickstarter

INMO GO3 Crowdfunding

INMO GO3 Introduction

u/AR_MR_XR — 3 days ago