Apple invents a system to avoid “burn-in” in AR glasses and headsets

A few days ago, the United States Patent and Trademark Office published an Apple patent application describing a feature, designed primarily for AR glasses and headsets, that would prevent so-called display “burn-in”.

(Image credit: Martin Hajek/iDropNews)

Burn-in is the effect whereby an image displayed for a long time in the same position on a screen degrades the screen’s phosphor (the display itself remains perfectly functional when this occurs), producing a so-called “ghost image”: a faded image that stays superimposed on whatever the display is showing.

Display operation based on eye activity

Apple’s patent application describes an eye-monitoring system designed to detect eye saccades and blinks and then make the necessary adjustments to the eye displays in real time, without the user knowing that this is happening in the background.

A saccade is a rapid eye movement that brings an initially peripheral region of the visual field onto its centre (the fovea). Humans perform several saccadic eye movements per second, using this high-resolution part of the retina to look at the object of interest.

During saccades and blinks, the user’s visual sensitivity is temporarily suppressed. The headset’s control circuitry can exploit this momentary suppression to change how the display operates: reducing power consumption, making otherwise intrusive image adjustments, or avoiding and mitigating burn-in effects, thereby saving energy and improving the device’s performance.
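To make the mechanism concrete, here is a minimal sketch, in Python, of the kind of control loop the application describes. The function names, thresholds and pixel-shift strategy are assumptions for illustration, not Apple’s implementation: potentially visible display changes (here, a small pixel-orbit shift that spreads wear to counter burn-in) are deferred until a saccade or blink suppresses the wearer’s visual sensitivity.

```python
import random

SACCADE_VELOCITY_DEG_S = 300.0  # assumed threshold: gaze speeds above this count as a saccade
MAX_SHIFT_PX = 2                # assumed maximum pixel-orbit offset used against burn-in

def eye_state(gaze_velocity_deg_s, eyelid_closed):
    """Classify the current eye activity from eye-tracker readings."""
    if eyelid_closed:
        return "blink"
    if gaze_velocity_deg_s > SACCADE_VELOCITY_DEG_S:
        return "saccade"
    return "fixation"

def next_shift():
    """Pick the next small image offset, spreading wear across pixels."""
    return (random.randint(-MAX_SHIFT_PX, MAX_SHIFT_PX),
            random.randint(-MAX_SHIFT_PX, MAX_SHIFT_PX))

def apply_pixel_shift(offset):
    """Stand-in for the display-driver call that offsets the rendered image."""
    print(f"display offset -> {offset}")

def display_loop(tracker_samples):
    """Defer perceptible display changes until visual sensitivity is suppressed."""
    pending = next_shift()
    for velocity, closed in tracker_samples:
        if eye_state(velocity, closed) in ("saccade", "blink"):
            # The wearer cannot perceive the change right now: apply the queued
            # shift (the same window could be used to dim the panel, lower the
            # refresh rate, and so on), then queue the next adjustment.
            apply_pixel_shift(pending)
            pending = next_shift()

# Simulated tracker stream: (gaze velocity in deg/s, eyelid closed?)
display_loop([(20.0, False), (450.0, False), (15.0, True), (30.0, False)])
```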

The image above shows a diagram of an eye-monitoring system that can collect information about a user’s eye. The system may include gaze-tracking components, image sensors, photodetectors and other light-sensing devices, plus further components for monitoring eye movements.

As with most patents, the application notes that the invention is not limited to glasses: it could also be used in future systems such as Mac heads-up displays, TVs and more.

Mirko Compagno
AR/VR/MR Architect & UX/UI Designer
Innovation Manager MISE: AR/VR display systems

This work is licensed under a Creative Commons Attribution 4.0 International License.

Augmented Reality Solution Supports Surgical Trauma Care

A set of smart surgical glasses with functionality based on augmented reality (AR) and mixed reality (MR) technologies brings a higher level of support to surgical trauma cases.

The Foresee-X, from Taiwan Main Orthopaedics Biotechnology Co. (Surglasses; Taichung, Taiwan), is a set of smart AR surgical glasses designed to enhance intra-operative fluoroscopy image synchronization, primarily during orthopedic trauma procedures. Features include image-enhancement functions, such as the ability to zoom in and out, allowing surgeons to concentrate on the operative field instead of monitors; reduced radiation exposure for staff and patient; and improved accuracy through tracking the movements of surgical tools such as puncture needles, trocars, etc.

Image: The Foresee-X augmented reality glasses (Photo courtesy of Surglasses)

The virtual and actual images are superimposed, and patient bone structure and tissues are fully visible through the smart glasses. In addition to improving overall surgical efficiency, the Foresee-X glasses can reduce OR staff radiation exposure by more than 60% compared to a mobile C-arm used for fluoroscopy. Foresee-X also allows outside observers to view procedures up close through tablet computers, as the device is equipped with an integrated camera with an 80 degree field of view that records video at 30 fps. The device can also collect data for academic purposes.

“The key to smart glasses is the algorithm. Since each person’s eyes have a different focal length, and with the addition of camera lens focus, synchronization would require the aid of high-performance computing,” said Min-Liang Wang, PhD, founder of Surglasses. “Furthermore, if the surgeon changes position during surgery, the image must be adjusted immediately for the new position. All of this can only be achieved by the development of cutting-edge technologies such as 5G and AR/MR.”

“Surglasses has been collaborating with hospitals in Taiwan and Malaysia to set up a specialized trauma center that includes Foresee-X as part of the equipment lineup. The smart surgical glasses are used for numerous kinds of orthopedic procedures including interlocking of nails, pelvic cases, wrists, shoulders, tibia, and many more,” said the company in a press statement. “With accuracy and efficiency as its main advantages, Foresee-X is the first of its kind on the market to provide cutting-edge assistance to surgeons and doctors dealing with trauma cases.”

AR is a term for a live direct or indirect view of a physical, real-world environment whose elements are augmented by computer-generated sensory input. It is related to a general concept called mediated reality, in which a view of reality is modified, possibly even diminished rather than augmented, by a computer. As a result, the technology can enhance the perception of reality.

source: https://www.hospimedica.com/surgical-techniques/articles/294780520/augmented-reality-solution-supports-surgical-trauma-care.html
CES 2020: Samsung Teases Prototype AR Glasses

Are augmented reality personal trainers the future of at-home exercise? 

Samsung kicked off the first day of CES 2020 with a bang this morning, offering attendees an in-depth look at a variety of cutting-edge products straight out of a science fiction novel, including a BB-8-style robot assistant, as well as new improvements to its proprietary voice assistant, Bixby.

Among the many products developed as part of its “Age of Experience” product strategy, Samsung also used its time on stage to tease its own dedicated AR headset. The company demonstrated its AR technology on stage in front of a live audience using its GEMS (Gait Enhancing & Motivating System) technology, which uses an exoskeleton device to correct a user’s posture and track certain body metrics.

The demonstration featured an AR training session with a digital personal trainer. According to Samsung, these AR glasses can be used to simulate personal gym sessions, mountain climbing, walking underwater, and a variety of other physically intensive activities from the comfort of home.

Of course, it goes without saying that the products shown are still very much in their developmental stage.

“Samsung will remain a hardware company, forever,” said Hyunsuk Kim, CEO of Samsung’s consumer electronics division. “It’s not about when we release the product, but it’s more crucial how much further we can evolve the technology. No other speaker in the world can control gadgets as much as Samsung can.”

Samsung’s Ballie / Image Credit: Samsung

In addition to new AR technology, the company also took the time to shine a light on the long-running Samsung Gear VR with an emotional video showing how the mobile headset is being used to help visually-impaired individuals connect with their families, friends, and loved ones.

With both Apple and Facebook currently developing their own dedicated AR devices, it’s clear that companies are beginning to see the value in augmented reality headsets as a potential replacement for conventional smartphone technology.

With CES only just getting started, no doubt we’ll be seeing a lot more AR technology over the next couple of days.

Feature Image Credit: Samsung

source: https://vrscout.com/news/ces-2020-samsung-prototype-ar-glasses/

Minority Report-style interfaces just took a step closer to reality

Minority Report has a lot to answer for, not least the stimulus given to a million articles like this about the future of the human-machine interface. Controlling internet-connected devices with gesture and voice is widely seen as the future but nothing has come close to the slick air interface imagined in Steven Spielberg’s 2002 movie.

Google hasn’t cracked it either, but it has something with potential, and it’s already inside an actual product: the Pixel 4 phone.

It’s disarmingly simple too and stems from the idea that the hand is the ultimate input device. The hand, would you believe, is “extremely precise, extremely fast”, says Google. Could this human action be finessed into the virtual world?

Google assigned its crack Advanced Technology and Projects team to the task, and it concentrated research on radio frequencies. We track massive objects like planes and satellites using radar, so could radar be used to track the micro-motions of the human hand?

Turns out that it can. A radar works by transmitting a radio wave toward a target; the radar’s receiver then intercepts the signal reflected from that target. Properties of the reflected signal include energy, time delay and frequency shift, which capture information about the object’s characteristics and dynamics, such as size, shape, orientation, material, distance and velocity.
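As a rough worked example of that last point (the numbers below are illustrative, not Soli’s specification), the echo’s round-trip delay yields the target’s distance and its Doppler shift yields radial velocity:

```python
C = 3.0e8  # speed of light, m/s

def range_from_delay(delay_s):
    """Round-trip delay to target distance: R = c * t / 2."""
    return C * delay_s / 2.0

def velocity_from_doppler(doppler_hz, carrier_hz):
    """Doppler shift to radial velocity: v = c * f_d / (2 * f0)."""
    return C * doppler_hz / (2.0 * carrier_hz)

# Example: a 60 GHz carrier (the band Soli is reported to use) and a hand
# about 30 cm away, moving slowly toward the sensor. Values are illustrative.
delay = 2 * 0.30 / C  # ~2 ns round trip for a 30 cm target
print(f"range   : {range_from_delay(delay):.2f} m")          # 0.30 m
print(f"velocity: {velocity_from_doppler(40.0, 60e9)} m/s")  # 0.1 m/s
```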

The next step is to translate that into interactions with physical devices.

Google did this by conceiving Virtual Tools: a series of gestures that mimic familiar interactions with physical tools. Examples include a virtual dial that you turn as if miming turning a volume control. The virtual tools metaphor, suggests Google, makes it easier to communicate, learn, and remember interactions.

While virtual, the interactions also feel physical and responsive. Imagine a button between thumb and index finger. It’s invisible but pressing it means there is natural haptic feedback as your fingers touch. It’s essentially touch but liberated from a 2D surface.

“Without the constraints of physical controls, these virtual tools can take on the fluidity and precision of our natural human hand motion,” Google states.
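To see how such a virtual tool might behave in code, here is a hedged sketch; the step size and gesture values are invented for illustration (Google hasn’t published this interface). A detected micro-rotation of the fingers is accumulated into a bounded dial value, much like miming a volume knob:

```python
def update_dial(level, rotation_deg, degrees_per_step=15.0, lo=0.0, hi=100.0):
    """Map a detected finger micro-rotation onto a bounded dial value."""
    level += rotation_deg / degrees_per_step
    return max(lo, min(hi, level))

volume = 50.0
for rotation in (30.0, 30.0, -15.0):  # simulated per-frame rotations, degrees
    volume = update_dial(volume, rotation)
    print(f"volume -> {volume:.0f}")  # 52, 54, 53
```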

The good news doesn’t end there. Turns out that radar has some unique properties, compared to cameras, for example. It has very high positional accuracy to sense the tiniest motion, it can work through most materials, it can be embedded into objects and is not affected by light conditions. In Google’s design, there are no moving parts so it’s extremely reliable and consumes little energy and, most important of all, you can shrink it and put it in a tiny chip.

Google started out five years ago with a large bench-top unit including multiple cooling fans but has redesigned and rebuilt the entire system into a single solid-state component of just 8mm x 10mm.

That means the chip can be embedded in wearables, phones, computers, cars and IoT devices and produced at scale.

Google developed two modulation architectures: a Frequency Modulated Continuous Wave (FMCW) radar and a Direct-Sequence Spread Spectrum (DSSS) radar. Both chips integrate the entire radar system into the package, including multiple beam-forming antennas that enable 3D-tracking and imaging.
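For the FMCW variant, the textbook relationship between sweep bandwidth, beat frequency and range gives a feel for the achievable precision. This is standard FMCW math with assumed numbers, not Soli’s published specification:

```python
C = 3.0e8  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Radar range resolution: dR = c / (2 * B)."""
    return C / (2.0 * bandwidth_hz)

def fmcw_range(beat_hz, bandwidth_hz, chirp_s):
    """FMCW beat frequency f_b = 2*R*B/(c*T), solved for R."""
    return beat_hz * C * chirp_s / (2.0 * bandwidth_hz)

# A 7 GHz sweep (roughly the 57-64 GHz band Soli is reported to use)
# resolves ~2 cm range bins; sub-millimetre *motion* is then recovered
# from phase changes between successive chirps, not from the bins themselves.
print(f"resolution: {range_resolution(7e9) * 100:.1f} cm")                             # ~2.1 cm
print(f"range at 10 kHz beat, 1 ms chirp: {fmcw_range(1e4, 7e9, 1e-3) * 100:.1f} cm")  # ~21.4 cm
```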

It is making available an SDK to encourage developers to build on its gesture recognition pipeline. The Soli libraries extract real-time signals from radar hardware, outputting signal transformations, high-precision position and motion data and gesture labels and parameters at frame rates from 100 to 10,000 frames per second.
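The article doesn’t document the SDK’s actual API, so the sketch below is a hypothetical consumer, not real Soli code; it only illustrates the shape of a pipeline that turns per-frame position, velocity and gesture labels into device actions:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class RadarFrame:
    """Hypothetical per-frame output, loosely modelled on what the article lists."""
    position: Tuple[float, float, float]  # high-precision position, metres
    velocity: float                       # radial velocity, m/s
    gesture: Optional[str]                # e.g. "swipe", "dial" or None

def on_frame(frame):
    """Map recognised gestures onto device actions."""
    if frame.gesture == "swipe":
        print("skip track")

def run(stream, handler):
    """Feed each radar frame to the application's handler."""
    for frame in stream:
        handler(frame)

run([RadarFrame((0.00, 0.10, 0.20), -0.05, None),
     RadarFrame((0.00, 0.10, 0.18), -0.40, "swipe")], on_frame)
```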

Just imagine the possibilities. In the Pixel 4, Soli is located at the top of the phone and enables hands-free gestures for functions such as silencing alarms, skipping tracks in music and interacting with new Pokémon Pikachu wallpapers. It will also detect presence and is integrated into Google’s Face Unlock 3D facial-recognition technology.

Geoff Blaber, vice president of research for the Americas at analyst CCS Insight, says it’s unlikely to be viewed as game-changing, but that assessment marginalises the technology and Google’s ambition for it.

In fact, this radar-based system could underpin a framework for a far wider user interface for any or all digital gadgets. It could be the interface which underpins future versions of Android.

Google has hinted as much. In a web post, Pixel product manager Brandon Barbello said Soli “represents the next step in our vision for ambient computing”.

“Pixel 4 will be the first device with Soli, powering our new Motion Sense features to allow you to skip songs, snooze alarms, and silence phone calls, just by waving your hand. These capabilities are just the start and just as Pixels get better over time, Motion Sense will evolve as well.”

This is a way of describing the volume of internet-connected devices likely to be pervasive in our environment – particularly the smart home – over the next few years. Everything from voice-activated speakers to heating, light control, CCTV and white goods will be linked to the web.

Google makes a bunch of these (from smoke detectors to speakers under its Nest brand) and wants to link them up under its operating system (feeding back ever more data about individuals to refine the user experience). The battle for the smart home will also be fought between Microsoft, Apple, Samsung and Amazon. Soli may be the smart interface that links not just Google products, but perhaps all these systems together.

Of course, it’s early days. The virtual gestures may be intuitive, but we still have to learn to use them; our virtual language needs to be built up. Previous gesture-recognition tech like the IR-driven Kinect and the Wii has proved to be an interesting novelty but clunky in practice. Gesture will work best when combined fluently with voice interaction and dovetailed with augmented reality so that we can view and manipulate text, graphics, even video, virtually.

Just like Minority Report, except without the gloves that Tom Cruise’s PreCrime detective wore.

It couldn’t get everything right.

source: https://www.redsharknews.com/technology/item/6724-minority-report-style-interfaces-just-took-a-step-closer-to-reality

Sony Is Launching a Location-Based Ghostbusters Training Experience in Augmented Reality

We’ve got almost a full year until the next installment of Ghostbusters arrives, but in the meantime, it turns out that Sony is about to launch an augmented reality experience that will let fans use immersive computing to combat the franchise’s whimsical apparitions.

Starting this Saturday, fans who can make it to Tokyo, Japan will be able to play “Ghostbusters Rookie Training” using head-mounted AR devices.

The location-based experience will use a prototype AR headset from Sony, as well as assorted accessories, to give users the power to explore a real-world setting populated by virtual ghosts and demons.

Image by Sony Japan/YouTube

But instead of putting users in a classic single-player situation, players will all have to work together to accomplish a series of Ghostbusters-related tasks, all while communicating with each other throughout the AR location-based gaming space.

(1) Players in Tokyo demonstrating the AR game, (2) A replica of the Ghostbusters proton pack, (3) Scene from the promotional video. Images via Ginza Sony Park

And in case there’s any doubt about the depth of the experience, would-be players should be warned that each program is about an hour long, so only truly devoted Ghostbusters fans should even think of giving this a try.

But that hour-long commitment might be worth it even for non-fans since there’s apparently an appearance by the infamous evil Stay Puft Marshmallow Man.

Image by Sony Japan/YouTube

Sony hasn’t posted much information about how the prototype AR headset works, but based on the video demonstrations it appears to use high-end waveguides, which “might” put the headset in the same general class as devices like the HoloLens and the Magic Leap One.

Along with the headset, there are other Ghostbuster-specific props included in the experience that may or may not be interactive controllers of some sort.

Image by Nurture Digital/YouTube

Aside from the Ghostbusters experience, Sony is apparently using the prototype for a couple of other experiences. One puts users in an interactive museum of ’60s memorabilia, and the other appears to be a concept for an outdoor interactive art project.

Image by Nurture Digital/YouTube

Several scenes in the other concept videos indicate that the headset may also include advanced hand tracking, along with attached earbuds and a large back-mounted module that looks like it might house a battery and some of the device’s computing components.

source: https://next.reality.news/news/sony-is-launching-location-based-ghostbusters-training-experience-augmented-reality-0208432/