ADVANCES IN AR AND VR WILL MAKE HUMAN GESTURES THE NEW COMPUTER INTERFACE

Think about this: Facebook is dedicating one-fifth of its staff to augmented and virtual reality—that’s about 10,000 people.

More importantly, they are building a reality that will shape our reality in the near future.

Our ability to “function” in and throughout the virtual and augmented world is advancing, and this is establishing a new domain for our engagement in the digital world.

From the days of punch cards to the keyboard, the touch screen, and even voice, we are both detaching from and attaching to this technological reality. Gaming may be the best example of that “other” reality that sometimes looms larger than our own world—whatever that reality may be.

One aspect of human engagement—gesturing—remains an essential and enduring component of human communication.

From a simple “come here” to other more vulgar signs, a gesture can be a powerful and effective tool. And now the electronic gesture is taking shape.

As the mouse takes a backseat to touch screens, the interactions of the user—with the device and others who may be watching—yield a new aspect of e-body language.

Reaching, pointing, touching, and grasping will no longer belong only to our physical world; they will be part of the virtual construct that will be central to the user experience, living alongside the “point and click” reality that defined the early days of our computer engagement.

The gesture—in open air—will emerge. As funny as we may find someone walking down the street seemingly talking to him or herself while on a call, soon we will find those animated talkers gesturing with abandon as they navigate the technological world that consumes them in both time and space. The digital gesture or gesticulation has emerged.

Enter eticulate, the e-version of gesticulate.

Gesticulation is the act of gesturing, and now the notion of the computer gesture is emerging, living in and with technology. Touch, push, move, stretch, and point are becoming part of the “body language” of the computer.

The interesting thing is that this new method isn’t just a step forward in technology, but a step back to humanity. It integrates the device with the individual. It’s the internet that you hold like a pet or shake hands with.

It’s a very real extension of yourself and establishes a direct neural connection with the way your brain functions. It’s your chance to include a “hug” in your technological engagement. And that may even drive your body’s creation of powerful chemicals such as oxytocin, sometimes called the “cuddle hormone.”

The new techno-gestures bring us closer to technology itself and expand our participatory role in a very human way. It’s called eticulate, and it touched you here first!

source: https://www.bbntimes.com/technology/advances-in-ar-and-vr-will-make-human-gestures-the-new-computer-interface

This work is licensed under a Creative Commons Attribution 4.0 International License.

HUA ZHIBING: CHINA’S FIRST VIRTUAL STUDENT

Hua Zhibing, the first Chinese virtual student, developed by Tsinghua University, met her fans on Thursday when she opened an account on the Chinese platform Sina Weibo (a Chinese microblogging site).

In her first Weibo post, the virtual student, named Hua Zhibing, greeted Chinese netizens and said she would begin studying in Tsinghua University’s computer science laboratory, attracting a good two thousand followers in just nine hours.

An introduction video of Hua Zhibing was published with her first Weibo post. In it, a young girl wanders around the campus while a female voice introduces herself. “I have been drawn to literature and art since I was born. Scientists not only gave me my appearance and my voice, but also taught me to compose,” said Hua Zhibing, noting that the background music in the video was composed by her.

Tang Jie, a professor in the Department of Computer Science at Tsinghua University and one of the main developers of Hua Zhibing, said during an artificial intelligence forum held from Tuesday to Thursday in Beijing that the girl in the video was a real person, but that her face and voice were virtually synthesized.

Hua Zhibing, virtual student

Photo Credits: Weibo

Hua Zhibing officially registered and became a student at Tsinghua University on Tuesday. The developers told the forum that they have high expectations for Hua Zhibing, hoping that she can keep learning and exploring, and cultivate creativity and communication skills in the future. The developers are also considering possible employment for the virtual student after graduation.

Hua Zhibing is based on the latest version of a Chinese-developed deep-learning model, Wudao 2.0, whose name literally means “understanding of natural laws.” It can process 1.75 trillion parameters, beating the record of 1.6 trillion previously set by Google’s Switch Transformer AI language model.

source: https://www.globaltimes.cn/page/202106/1225392.shtml

This work is licensed under a Creative Commons Attribution 4.0 International License.

Apple invents a system to avoid “burn-in” in AR glasses and headsets

A few days ago, the United States Patent and Trademark Office published an Apple patent application describing a feature, designed primarily for AR glasses and headsets, that would prevent so-called display “burn-in.”

(Image credit: Martin Hajek/iDropNews)

Burn-in is the effect whereby an image displayed for a long time in the same position on the screen degrades the screen’s phosphors (the display itself is still perfectly functional when the problem occurs), producing a so-called “ghost image”: a faded image that stays superimposed on whatever the display is showing.

Display operation based on eye activity

Apple’s patent application concerns an eye-monitoring system designed to detect eye “saccades” and blinks and then make the necessary adjustments to the eye displays in real time, without the user knowing that this is happening in the background.

Saccades are rapid eye movements that bring an initially peripheral region to the center of the visual field (onto the fovea). Humans perform several saccadic movements per second in order to point this high-resolution part of the retina at the object of interest.

During saccades and blinks, the user’s visual sensitivity is temporarily suppressed. The headset’s control circuitry can exploit this momentary suppression to change how the display operates: reducing power consumption, making image adjustments that would otherwise be intrusive, and avoiding or reducing burn-in effects, thereby saving energy and improving device performance.
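
To make the mechanism concrete, here is a minimal Python sketch of the idea, assuming a hypothetical eye tracker that reports “blink” and “saccade” events (an illustration of the patent’s concept, not Apple’s implementation): the controller shifts the rendered image by a couple of pixels only while the viewer’s sensitivity is suppressed, a classic burn-in countermeasure.

    import random

    class BurnInMitigator:
        """Shift the image only when the viewer cannot see the change.

        Hypothetical sketch of the patent's idea, not Apple's code.
        """

        MAX_SHIFT = 2  # pixels; small enough to go unnoticed

        def __init__(self):
            self.offset = (0, 0)  # (x, y) shift applied at composition time

        def on_eye_event(self, event: str) -> None:
            # Saccades and blinks suppress visual sensitivity for tens of
            # milliseconds; use that window to move static UI off its pixels.
            if event in ("saccade", "blink"):
                self.offset = (
                    random.randint(-self.MAX_SHIFT, self.MAX_SHIFT),
                    random.randint(-self.MAX_SHIFT, self.MAX_SHIFT),
                )

        def compose(self, frame):
            # A real renderer would translate the framebuffer or nudge the
            # projection matrix; here we just report the applied shift.
            return {"frame": frame, "shift": self.offset}

    mitigator = BurnInMitigator()
    mitigator.on_eye_event("blink")
    print(mitigator.compose("ui_frame")["shift"])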

The patent illustration shows a diagram of an eye-monitoring system that can gather information about a user’s eyes. The system may include gaze-tracking components, image sensors, photodetectors and other light-sensing devices, and further components for monitoring eye movements.

As with most patents, Apple notes that the invention is not limited to glasses: it could also be used in future systems such as heads-up displays, Macs, TVs, and more.

Mirko Compagno
AR/VR/MR Architect & UX/UI Designer
Innovation Manager, MISE: AR/VR display systems

This work is licensed under a Creative Commons Attribution 4.0 International License.

Augmented Reality Solution Supports Surgical Trauma Care

A set of smart surgical glasses with functionality based on augmented reality (AR) and mixed reality (MR) technologies brings a higher level of support to surgical trauma cases.

The Foresee-X from Taiwan Main Orthopaedics Biotechnology Co. (Surglasses; Taichung, Taiwan) is a set of smart AR surgical glasses designed to enhance intra-operative fluoroscopy image synchronization, primarily during orthopedic trauma procedures. Features include image enhancement functions, such as the ability to zoom in and out, allowing surgeons to concentrate on the operative field instead of monitors; reduced radiation exposure for staff and patient; and improved accuracy through tracking the movements of surgical tools such as puncture needles, trocars, etc.

Image: The Foresee-X augmented reality glasses (Photo courtesy of Surglasses)

The virtual and actual images are superimposed, and patient bone structure and tissues are fully visible through the smart glasses. In addition to improving overall surgical efficiency, the Foresee-X glasses can reduce OR staff radiation exposure by more than 60% compared to a mobile C-arm used for fluoroscopy. Foresee-X also allows outside observers to view procedures up close through tablet computers, as the device is equipped with an integrated camera with an 80 degree field of view that records video at 30 fps. The device can also collect data for academic purposes.

“The key to smart glasses is the algorithm. Since each person’s eyes have a different focal length, and with the addition of camera lens focus, synchronization would require the aid of high-performance computing,” said Min-Liang Wang, PhD, founder of Surglasses. “Furthermore, if the surgeon changes position during surgery, the image must be adjusted immediately for the new position. All of this can only be achieved by the development of cutting-edge technologies such as 5G and AR/MR.”
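
To give a flavor of the computation involved, below is a simplified Python sketch (a generic illustration, not Surglasses’ actual algorithm) that re-projects a 2D fluoroscopy overlay with a homography whenever head tracking reports that the surgeon has moved:

    import numpy as np

    def reproject_overlay(points_2d: np.ndarray, homography: np.ndarray) -> np.ndarray:
        """Map fluoroscopy overlay points into the current eyepiece view.

        points_2d: (N, 2) pixel coordinates in the fluoroscopy image.
        homography: 3x3 matrix relating the fluoroscopy plane to the display,
        re-estimated from head tracking whenever the surgeon moves.
        """
        ones = np.ones((points_2d.shape[0], 1))
        homogeneous = np.hstack([points_2d, ones])  # promote to homogeneous
        mapped = homogeneous @ homography.T         # apply the homography
        return mapped[:, :2] / mapped[:, 2:3]       # de-homogenize

    # Example: an identity pose leaves the overlay untouched.
    pts = np.array([[100.0, 200.0], [320.0, 240.0]])
    print(reproject_overlay(pts, np.eye(3)))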

“Surglasses has been collaborating with hospitals in Taiwan and Malaysia to set up a specialized trauma center that includes Foresee-X as part of the equipment lineup. The smart surgical glasses are used for numerous kinds of orthopedic procedures including interlocking of nails, pelvic cases, wrists, shoulders, tibia, and many more,” said the company in a press statement. “With accuracy and efficiency as its main advantages, Foresee-X is the first of its kind on the market to provide cutting-edge assistance to surgeons and doctors dealing with trauma cases.”

AR is a term for a live direct or indirect view of a physical, real-world environment whose elements are augmented by computer-generated sensory input. It is related to a general concept called mediated reality, in which a view of reality is modified–possibly even diminished rather than augmented–by a computer. As a result, the technology can enhance the perception of reality.

source: https://www.hospimedica.com/surgical-techniques/articles/294780520/augmented-reality-solution-supports-surgical-trauma-care.html

CES 2020: Samsung Teases Prototype AR Glasses

Are augmented reality personal trainers the future of at-home exercise? 

Samsung kicked off the first day of CES 2020 with a bang this morning, offering attendees an in-depth look at a variety of cutting-edge products straight out of a science fiction novel, including a BB-8-style robot assistant, as well as new improvements to its proprietary voice assistant, Bixby.

Among the many products developed as part of its “Age of Experience” product strategy, Samsung also used its time on stage to tease its own dedicated AR headset. The company demonstrated its AR technology in front of a live audience using its GEMS (Gait Enhancing & Motivating System) technology, which uses an exoskeleton device to correct a user’s posture and track certain body metrics.

The demonstration involved an AR training session involving a digital personal trainer. According to Samsung, these AR glasses can be used to simulate personal gym sessions, mountain climbing, walking underwater, and a variety of other physically intensive activities from the comfort of home.

Of course, it goes without saying that the products shown are still very much in their developmental stage.

“Samsung will remain a hardware company, forever,” said Hyunsuk Kim, CEO of Samsung’s consumer electronics division. “It’s not about when we release the product, but it’s more crucial how much further we can evolve the technology. No other speaker in the world can control gadgets as much as Samsung can.”

Samsung’s Ballie / Image Credit: Samsung

In addition to new AR technology, the company also took the time to shine a light on the long-running Samsung Gear VR with an emotional video showing how the mobile headset is being used to help visually impaired individuals connect with their families, friends, and loved ones.

With both Apple and Facebook currently developing their own dedicated AR devices, it’s clear that companies are beginning to see the value in augmented reality headsets as a potential replacement for conventional smartphone technology.

With CES only just getting started, no doubt we’ll be seeing a lot more AR technology over the next couple of days.

Feature Image Credit: Samsung

source: https://vrscout.com/news/ces-2020-samsung-prototype-ar-glasses/

Minority Report style interfaces just took a step closer to reality

Minority Report has a lot to answer for, not least the stimulus given to a million articles like this about the future of the human-machine interface. Controlling internet-connected devices with gesture and voice is widely seen as the future but nothing has come close to the slick air interface imagined in Steven Spielberg’s 2002 movie.

Google hasn’t cracked it either – but it’s got something that has potential and it’s already inside an actual product, the Pixel 4 phone.

It’s disarmingly simple too and stems from the idea that the hand is the ultimate input device. The hand, would you believe, is “extremely precise, extremely fast”, says Google. Could this human action be finessed into the virtual world?

Google assigned its crack Advanced Technology and Projects team to the task and they concentrated research on radio frequencies. We track massive objects like planes and satellites using radar, so could it be used to track the micro-motions of the human hand?

Turns out that it can. A radar works by transmitting a radio wave toward a target; the radar’s receiver then intercepts the signal reflected from that target. Properties of the reflected signal, including energy, time delay and frequency shift, capture information about the object’s characteristics and dynamics, such as size, shape, orientation, material, distance and velocity.
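
Two of those properties do most of the ranging work: the round-trip time delay gives distance, and the Doppler frequency shift gives radial velocity. A short worked example (generic radar equations, not Soli-specific):

    C = 3.0e8  # speed of light, m/s

    def range_from_delay(delay_s: float) -> float:
        # The wave travels to the target and back, hence the factor of 2.
        return C * delay_s / 2

    def velocity_from_doppler(doppler_hz: float, carrier_hz: float) -> float:
        # Radial velocity from the Doppler shift of the reflected wave.
        return C * doppler_hz / (2 * carrier_hz)

    # A hand 30 cm away returns the signal in about 2 ns; at a 60 GHz
    # carrier (the band Soli operates in), a 1 kHz Doppler shift
    # corresponds to just 2.5 mm/s of motion.
    print(range_from_delay(2e-9))             # ~0.3 m
    print(velocity_from_doppler(1000, 60e9))  # 0.0025 m/s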

The next step is to translate that into interactions with physical devices.

Google did this by conceiving Virtual Tools: a series of gestures that mimic familiar interactions with physical tools. Examples include a virtual dial that you turn as if miming turning a volume control. The virtual tools metaphor, suggests Google, makes it easier to communicate, learn, and remember interactions.

While virtual, the interactions also feel physical and responsive. Imagine a button between thumb and index finger. It’s invisible but pressing it means there is natural haptic feedback as your fingers touch. It’s essentially touch but liberated from a 2D surface.

“Without the constraints of physical controls, these virtual tools can take on the fluidity and precision of our natural human hand motion,” Google states.

The good news doesn’t end there. Turns out that radar has some unique properties, compared to cameras, for example. It has very high positional accuracy to sense the tiniest motion, it can work through most materials, it can be embedded into objects and is not affected by light conditions. In Google’s design, there are no moving parts so it’s extremely reliable and consumes little energy and, most important of all, you can shrink it and put it in a tiny chip.

Google started out five years ago with a large bench-top unit including multiple cooling fans but has redesigned and rebuilt the entire system into a single solid-state component of just 8mm x 10mm.

That means the chip can be embedded in wearables, phones, computers, cars and IoT devices and produced at scale.

Google developed two modulation architectures: a Frequency Modulated Continuous Wave (FMCW) radar and a Direct-Sequence Spread Spectrum (DSSS) radar. Both chips integrate the entire radar system into the package, including multiple beam-forming antennas that enable 3D-tracking and imaging.
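
For intuition on the FMCW variant (standard FMCW relations, not Soli’s published specification): the swept bandwidth sets the range resolution, and a target’s distance shows up as a beat frequency between the transmitted and received chirps.

    C = 3.0e8  # speed of light, m/s

    def range_resolution(bandwidth_hz: float) -> float:
        # FMCW can separate two targets no closer than c / (2B).
        return C / (2 * bandwidth_hz)

    def beat_frequency(range_m: float, bandwidth_hz: float, chirp_s: float) -> float:
        # Chirp slope (Hz/s) times the round-trip delay (s).
        slope = bandwidth_hz / chirp_s
        return slope * (2 * range_m / C)

    # A 7 GHz sweep gives roughly 2 cm resolution, fine enough to
    # separate the fingers of a hand; a target at 30 cm with a 1 ms
    # chirp produces a ~14 kHz beat tone.
    print(range_resolution(7e9))           # ~0.021 m
    print(beat_frequency(0.3, 7e9, 1e-3))  # ~14000 Hz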

It is making available an SDK to encourage developers to build on its gesture recognition pipeline. The Soli libraries extract real-time signals from radar hardware, outputting signal transformations, high-precision position and motion data and gesture labels and parameters at frame rates from 100 to 10,000 frames per second.
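
In application code, a pipeline like that surfaces as gesture events rather than raw radar frames. The sketch below is purely illustrative; the callback and action names are hypothetical stand-ins, not the real Soli SDK API:

    # Illustrative only: these names are hypothetical stand-ins,
    # not the actual Soli SDK API.

    def skip_track(): print("next track")
    def previous_track(): print("previous track")
    def toggle_play(): print("play/pause")

    def on_gesture(label: str, confidence: float) -> None:
        # The pipeline emits symbolic gesture labels with confidences,
        # so the app never touches the underlying radar signal.
        actions = {
            "swipe_left": skip_track,
            "swipe_right": previous_track,
            "air_tap": toggle_play,
        }
        if confidence >= 0.8 and label in actions:
            actions[label]()

    # A recognizer would fire this at high frame rates; here we
    # simulate a single confident swipe.
    on_gesture("swipe_left", 0.92)  # -> next track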

Just imagine the possibilities. In the Pixel 4, Soli is located at the top of the phone and enables hands-free gestures for functions such as silencing alarms, skipping tracks in music and interacting with new Pokémon Pikachu wallpapers. It will also detect presence and is integrated into Google’s Face Unlock 3D facial-recognition technology.

Geoff Blaber, vice president of research for the Americas at analyst firm CCS Insight, says it’s unlikely to be viewed as game-changing, but that view marginalises the technology and Google’s ambition for it.

In fact, this radar-based system could underpin a framework for a far wider user interface for any or all digital gadgets. It could be the interface which underpins future versions of Android.

Google has hinted as much. In a web post, Pixel product manager Brandon Barbello said Soli “represents the next step in our vision for ambient computing”.

“Pixel 4 will be the first device with Soli, powering our new Motion Sense features to allow you to skip songs, snooze alarms, and silence phone calls, just by waving your hand. These capabilities are just the start and just as Pixels get better over time, Motion Sense will evolve as well.”

“Ambient computing” is a way of describing the volume of internet-connected devices likely to be pervasive in our environment, particularly the smart home, over the next few years. Everything from voice-activated speakers to heating, light control, CCTV and white goods will be linked to the web.

Google makes a bunch of these (from smoke detectors to speakers under its Nest brand) and wants to link them up under its operating system (feeding back ever more data about individuals to refine the user experience). The battle for the smart home will also be fought between Microsoft, Apple, Samsung and Amazon. Soli may be the smart interface that links not just Google products, but perhaps all these systems together.

Of course, it’s early days. The virtual gestures may be intuitive, but we still have to learn to use them; our virtual language needs to be built up. Previous gesture recognition tech, like the IR-driven Kinect and the Wii, has proved to be an interesting novelty but clunky in practice. Gesture will work best when combined fluently with voice interaction and dovetailed with augmented reality, so that we can view and manipulate text, graphics, even video, virtually.

Just like Minority Report – except without the gloves which Tom Cruise’s PreCrime detective wore.

It couldn’t get everything right.

source: https://www.redsharknews.com/technology/item/6724-minority-report-style-interfaces-just-took-a-step-closer-to-reality