US20240364841 - Representing Real-World Objects with a Virtual Reality Environment
An image processing system can provide a virtual reality (VR) experience to a user wearing a head-mounted display (HMD) and can enable the user to interact with one or more objects in a real-world environment. In one example, the image processing system receives image data (e.g., one or more still frame images and/or video frame images, etc.) of a scene. In some cases, receiving data can include capturing, detecting, acquiring, and/or obtaining data. The scene can be associated with a real-world environment around the user wearing the HMD. The real-world environment can include a real-world object that is captured in the scene. In other words, received image data representing the scene can include image data that represents the real-world object. The real-world object in the captured scene (i.e., in the received image data of the scene) is referred to as a target object. In this example, the user wearing the HMD and experiencing a virtual environment may desire or intend to interact with the target object while continuing to experience the virtual environment while wearing the HMD. The image processing system can detect or identify the target object in the captured scene. After identifying the target object in the captured image, the image processing system can include the target object within the virtual environment that the user is experiencing via the HMD. A generated scene including the virtual environment and a rendering (i.e., a rendered/generated representation) of the target object is referred to as a combined scene. The image processing system can present the combined scene to the user via the HMD.
In some embodiments, the image processing system creates the appearance that the target object (e.g., received pixel data representing the target object) “passes through” into the virtual environment provided to the user via the HMD. A user holding a target object, for example, may have the target object represented in the virtual world shown in the HMD at the location of the physical object in the real world. For instance, pixel data received for the target object (e.g., real-world object) can be used to generate pixel data for a representation of the target object rendered in combination with the virtual environment. The pixel data for the representation of the target object can be rendered in a combined scene with the virtual environment. In some cases, the pixel data received for the target object can be modified in order to generate the pixel data for the representation of the target object rendered in combination with the virtual environment. In some cases, the pixel data for the representation of the target object can be generated to be equivalent to the pixel data initially received for the target object.
Moreover, in some implementations, the image processing system can cause the target object to appear to be overlaid on the virtual environment experienced by the user wearing the HMD. In some implementations, while rendering the target object with the virtual environment, the image processing system can apply a graphical overlay, such as a skin, to the target object. As used herein, a graphical overlay refers to a visual effect that the image processing system applies in association with rendering a representation of the real-world object. In some cases, the graphical overlay (e.g., skin) can be applied in an attempt to help the user track the target object in the virtual environment, and/or to allow the target object to fit the virtual environment more appropriately in a graphical sense (e.g., to visually fit a theme of the virtual environment).
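As a loose illustration of the pass-through and overlay behavior described above (a minimal sketch under assumed inputs, not the publication's actual pipeline), the following Python snippet composites received pixel data for a segmented target object onto a rendered virtual frame and optionally tints it with a simple overlay; the array layout, mask, and tint parameters are invented for the example.

```python
import numpy as np

def composite_target_object(virtual_frame, camera_frame, target_mask,
                            overlay_color=None, overlay_strength=0.3):
    """Blend pixels of a detected target object into a rendered virtual scene.

    virtual_frame : (H, W, 3) float array, the rendered virtual environment
    camera_frame  : (H, W, 3) float array, the received image of the real scene
    target_mask   : (H, W) float array in [0, 1], segmentation of the target object
    overlay_color : optional (3,) float array used as a simple "skin" tint
    """
    target_pixels = camera_frame.astype(float).copy()
    if overlay_color is not None:
        # Apply a graphical overlay by blending the object's pixels toward a tint color.
        target_pixels = (1 - overlay_strength) * target_pixels + overlay_strength * overlay_color

    # Pass the (possibly modified) target-object pixels through into the virtual frame.
    alpha = target_mask[..., None]
    return (1 - alpha) * virtual_frame + alpha * target_pixels

# Minimal demonstration with synthetic data.
h, w = 4, 4
virtual = np.zeros((h, w, 3))          # dark virtual environment
camera = np.full((h, w, 3), 0.8)       # bright real-world capture
mask = np.zeros((h, w))
mask[1:3, 1:3] = 1.0                   # target object occupies the center
combined = composite_target_object(virtual, camera, mask,
                                    overlay_color=np.array([0.2, 0.6, 1.0]))
print(combined.shape, combined[2, 2])  # center pixels carry the tinted object pixels
```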
- DGIST Professor Youngu Lee and Jeonbuk National University Professor Jaehyuk Lim successfully developed an ultra-sensitive, transparent, and flexible electronic skin mimicking the neural network in the human brain. - Applicable across different areas, including healthcare wearable devices and transparent display touch panels.
An antenna of the radar sub-system may be positioned downward from a head of a user and at a predefined angle to detect the gesture while the user is in a natural pose.
1. An augmented reality system for a pilot in a cockpit of an aircraft, said aircraft having a geospatial location, altitude and attitude at a given moment, said system comprising:
a display for displaying a visual environment outside of the aircraft augmented with virtual content;
a computer content presentation system for generating virtual content based on an object not in said visual environment, and causing a representation of said virtual content to be displayed on said display;
wherein said virtual content comprises at least geospatial location of said object; wherein said representation of said virtual content is displayed on said display relative to at least said aircraft's location, altitude, and attitude at said given moment; and
wherein said object is based on third party data.
2. The augmented reality system of claim 1, wherein said third party data comprises at least one of automatic dependent surveillance-broadcast (ADS-B), airborne warning and control system (AWACS) data, map/terrain data, weather data, taxiing data, jamming signal map/data, electromagnetic map data, or intelligence data.
3. The augmented reality system of claim 1, wherein said virtual content comprises a speed of said object and an orientation of said object at said geospatial location.
4. The system of claim 1, wherein said display comprises at least one of a head-mounted display (HMD), eyeglasses, Head-Up Display (HUD), smart contact lenses, a virtual retinal display, an eye tap, a Primary Flight Display (PFD) and a cockpit glass.
5. The augmented reality system of claim 4, wherein said display is a see-through display.
6. The augmented reality system of claim 5, wherein said system comprises a helmet worn by said pilot, said helmet comprising said display.
7. The augmented reality system of claim 6, further comprising a helmet position sensor system configured to determine a location and orientation of said helmet within said cockpit.
8. The augmented reality system of claim 7, wherein said representation is displayed on said display relative also to said location and orientation of said helmet in said cockpit.
9. A method of enhancing a view of a pilot using augmented reality, said pilot being in a cockpit of an aircraft, said aircraft having a geospatial location, altitude and attitude at a given moment, said method comprising:
displaying a visual environment outside of the aircraft augmented with virtual content;
generating said virtual content based on an object not in said visual environment, and causing a representation of said virtual content to be displayed on said display; wherein said virtual content comprises at least geospatial location of said object; wherein said representation of said virtual content is displayed on said display relative to at least said aircraft's location, altitude, and attitude at said given moment; and
wherein said object is based on third party data.
10. The method of claim 9, wherein said third party data comprises at least one of automatic dependent surveillance-broadcast (ADS-B), airborne warning and control system (AWACS) data, map/terrain data, weather data, taxiing data, jamming signal map/data, electromagnetic map data, or intelligence data.
11. The method of claim 9, wherein said virtual content comprises a speed of said object and an orientation of said object at said geospatial location.
12. The method of claim 9, wherein said display comprises at least one of a head-mounted display (HMD), eyeglasses, Head-Up Display (HUD), smart contact lenses, a virtual retinal display, an eye tap, a Primary Flight Display (PFD) and a cockpit glass.
13. An augmented reality system for a pilot in a cockpit of an aircraft, said aircraft having a geospatial location, altitude and attitude at a given moment, said system comprising: a display for displaying a visual environment outside of the aircraft augmented with virtual content;
a computer content presentation system for generating virtual content based on an object not in said visual environment, and causing a representation of said virtual content to be displayed on said display;
wherein said virtual content comprises at least geospatial location of said object; wherein said representation of said virtual content is displayed on said display relative to at least said aircraft's location, altitude, and attitude at said given moment; wherein said object is a virtual landing platform.
14. The augmented reality system of claim 13, wherein said virtual landing platform is a virtual aircraft carrier landing deck.
15. The augmented reality system of claim 14, wherein said virtual content comprises a speed of said object and an orientation of said object at said geospatial location.
16. The augmented reality system of claim 14, wherein said representation of said virtual content is delimited to a region around said object so as to leave a portion of said visual environment unobscured by said virtual content.
17. The augmented reality system of claim 13, further comprising assessing the pilot's performance landing on said virtual landing platform based on information related to a calculated intersection of said aircraft and said virtual landing platform.
18. A method of training a pilot using augmented reality, said pilot being in a cockpit of an aircraft, said aircraft having a geospatial location, altitude and attitude at a given moment, said method comprising:
displaying a visual environment outside of the aircraft augmented with virtual content;
generating said virtual content based on an object not in said visual environment, and causing a representation of said virtual content to be displayed on said display; wherein said virtual content comprises at least geospatial location of said object; wherein said representation of said virtual content is displayed on said display relative to at least said aircraft's location, altitude, and attitude at said given moment; and
wherein said object is a virtual landing platform.
19. The method of claim 18, wherein said landing platform is an aircraft landing deck.
20. The method of claim 19, wherein said virtual content comprises a speed of said object and an orientation of said object at said geospatial location.
21. The method of claim 18, further comprising: assessing the pilot's performance landing on said virtual landing platform based on information related to a calculated intersection of said aircraft and said virtual landing platform.
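Purely as an illustration of the geometry implied by these claims (not the claimed system itself), the following sketch turns an object's geospatial location into azimuth/elevation angles at which a symbol could be drawn on the pilot's display, given the aircraft's position and attitude at a given moment. The flat-earth approximation, function names, and example coordinates are assumptions made for clarity.

```python
import math

def enu_offset(lat_a, lon_a, alt_a, lat_o, lon_o, alt_o):
    """Flat-earth east/north/up offset (meters) from aircraft to object.
    Adequate only for short ranges; a real system would use proper geodesy."""
    r_earth = 6371000.0
    d_north = math.radians(lat_o - lat_a) * r_earth
    d_east = math.radians(lon_o - lon_a) * r_earth * math.cos(math.radians(lat_a))
    d_up = alt_o - alt_a
    return d_east, d_north, d_up

def to_display_angles(d_east, d_north, d_up, heading_deg, pitch_deg, roll_deg):
    """Rotate the world-frame offset into the aircraft body frame and return
    the azimuth/elevation (degrees) at which to render the symbol."""
    h, p, r = (math.radians(x) for x in (heading_deg, pitch_deg, roll_deg))
    # World (east/north/up) -> body: yaw, then pitch, then roll.
    x = d_north * math.cos(h) + d_east * math.sin(h)           # forward
    y = -d_north * math.sin(h) + d_east * math.cos(h)          # right
    z = d_up                                                   # up
    x, z = x * math.cos(p) + z * math.sin(p), -x * math.sin(p) + z * math.cos(p)
    y, z = y * math.cos(r) - z * math.sin(r), y * math.sin(r) + z * math.cos(r)
    azimuth = math.degrees(math.atan2(y, x))
    elevation = math.degrees(math.atan2(z, math.hypot(x, y)))
    return azimuth, elevation

# Example: a contact 0.05 degrees of latitude north of the aircraft and 300 m above it.
e, n, u = enu_offset(37.0, -122.0, 1000.0, 37.05, -122.0, 1300.0)
print(to_display_angles(e, n, u, heading_deg=0.0, pitch_deg=0.0, roll_deg=0.0))
```

A production system would also fold in the helmet position and orientation from the helmet position sensor system (claim 8) before rasterizing the symbol.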
The present disclosure relates to a head-mounted display apparatus, which comprises a head-mounted display portion and a control portion in signal connection with the head-mounted display portion. The control portion comprises a main battery, an auxiliary battery, and a switching circuit module connected to the main battery and to the auxiliary battery, the switching circuit module being used for power-supply switching between the main battery and the auxiliary battery. The switching circuit module comprises a first triggering connecting piece and a second triggering connecting piece, which have a first state and a second state: in the first state, the first triggering connecting piece and the second triggering connecting piece are in a non-conducting state and the main battery supplies power; in the second state, the first triggering connecting piece and the second triggering connecting piece are in a conducting state, so as to trigger a switching signal and switch to auxiliary-battery power supply. The present disclosure further relates to a communication command system.
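For readers who prefer a concrete model, here is a tiny, purely illustrative sketch of the two-state switching logic described in the abstract; the real apparatus is a hardware circuit, and the names and methods below are assumptions made only for clarity.

```python
from dataclasses import dataclass

@dataclass
class PowerSwitch:
    """Toy model of the described switching circuit: two trigger connecting
    pieces that are either non-conducting (first state) or conducting (second state)."""
    conducting: bool = False   # False -> first state, True -> second state

    def active_supply(self) -> str:
        # First state: trigger pieces non-conducting, the main battery supplies power.
        # Second state: trigger pieces conduct, a switching signal is triggered
        # and the auxiliary battery takes over.
        return "auxiliary battery" if self.conducting else "main battery"

    def close_trigger(self) -> None:
        self.conducting = True   # e.g., the main battery is being removed or swapped

    def open_trigger(self) -> None:
        self.conducting = False  # main battery reinstalled

switch = PowerSwitch()
print(switch.active_supply())   # main battery
switch.close_trigger()
print(switch.active_supply())   # auxiliary battery
```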
At Somnia, we are thrilled to announce our strategic partnership with Unstoppable Domains, a leading provider of blockchain-based domain names. Together, we are making decentralized digital identities more accessible and affordable for our community, with the official launch of the .Dream domain now available in Somnia’s ecosystem. This collaboration is designed to simplify the acquisition […]
The rover drives to the samples with an accuracy of 10 cm, constantly mapping the terrain. Codi uses its arm and four cameras to locate the sample tube, retrieve it, and safely store it on the rover – all without human intervention.
US20230326092 Head mounted displays (HMDs) are used, for example, in the field of virtual environments (e.g., virtual reality, augmented reality, the metaverse, or other visual representation of an environment based upon data and with which a user can interact). In such virtual environments, human users may wear HMDs and engage with others in the virtual environment, even though the human users may be physically located remotely from others. In such an environment, a common use case is one where a virtual meeting is taking place (e.g., an office meeting, a class meeting, etc.). Such a virtual meeting may include, for example, a plurality of audience members wearing a respective plurality of HMDs, and a speaker who is speaking to the audience members or alternatively is presenting information to the audience members. However, present virtual environment systems with HMDs do not provide speakers with highly accurate, real-time visual cues about audience attention or feelings. Claims:
In one implementation, the at least one visual indicator is an emoji.
In one implementation, the displaying of the at least one visual indicator may be optionally enabled or disabled.
In one implementation, the movement data includes head movement data and hand movement data.
In one implementation, the translated reactions are emotions.
In one implementation, the method further comprises evaluating the training results using the set of rules, which have been trained by the training data to translate movement data from an HMD into visual indicators, and comparing the visual indicators translated from the movement data against the visual indicators translated from the recorded reactions in the training data.
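To make the translation and evaluation steps above concrete, here is a small hypothetical sketch in Python; the rules, thresholds, emoji labels, and sample data are invented for illustration and are not the publication's trained rule set.

```python
# Hypothetical rule set: map simple head/hand movement statistics to reactions
# (emoji indicators), then score the rules against observer-recorded reactions.

def translate(sample, rules):
    """Translate one HMD movement sample into a reaction label (emoji)."""
    for condition, emoji in rules:
        if condition(sample):
            return emoji
    return "😐"  # neutral default when no rule fires

# Illustrative rules; a trained system would learn thresholds from training data.
RULES = [
    (lambda s: s["head_nod_rate"] > 0.5, "👍"),    # frequent nodding -> agreement
    (lambda s: s["head_shake_rate"] > 0.5, "👎"),  # frequent shaking -> disagreement
    (lambda s: s["hand_raise"], "✋"),              # raised hand -> question
]

def evaluate(training_data, rules):
    """Compare rule-translated indicators against observer-recorded reactions."""
    correct = sum(translate(sample, rules) == recorded for sample, recorded in training_data)
    return correct / len(training_data)

training_data = [
    ({"head_nod_rate": 0.8, "head_shake_rate": 0.0, "hand_raise": False}, "👍"),
    ({"head_nod_rate": 0.1, "head_shake_rate": 0.7, "hand_raise": False}, "👎"),
    ({"head_nod_rate": 0.0, "head_shake_rate": 0.0, "hand_raise": True}, "✋"),
    ({"head_nod_rate": 0.0, "head_shake_rate": 0.0, "hand_raise": False}, "👍"),
]
print(f"agreement with recorded reactions: {evaluate(training_data, RULES):.0%}")  # 75%
```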
"The SpaceX-backed mission has no professional astronauts aboard.
MISSION OBJECTIVES
HIGH ALTITUDE
This Dragon mission will take advantage of Falcon 9 and Dragon’s maximum performance, flying higher than any Dragon mission to date and endeavoring to reach the highest Earth orbit ever flown. Orbiting through portions of the Van Allen radiation belt, Polaris Dawn will conduct research with the aim of better understanding the effects of spaceflight and space radiation on human health.
First Commercial Spacewalk
At approximately 700 kilometers above the Earth, the crew will attempt the first-ever commercial extravehicular activity (EVA) with SpaceX-designed EVA spacesuits, upgraded from the current intravehicular activity (IVA) suit. Building a base on the Moon and a city on Mars will require thousands of spacesuits; the development of this suit and the execution of the EVA will be important steps toward a scalable design for spacesuits on future long-duration missions.
In-Space Communications
The Polaris Dawn crew will be the first crew to test Starlink laser-based communications in space, providing valuable data for future space communications systems necessary for missions to the Moon, Mars and beyond.
Health Impact Research
While in orbit, the crew will conduct scientific research designed to advance both human health on Earth and our understanding of human health during future long-duration spaceflights. This includes, but is not limited to:
Using ultrasound to monitor, detect, and quantify venous gas emboli (VGE), contributing to studies on the prevalence of decompression sickness in humans;
Gathering data on the radiation environment to better understand how space radiation affects human biological systems;
Providing biological samples towards multi-omics analyses for a long-term Biobank; and
Research related to Spaceflight Associated Neuro-Ocular Syndrome (SANS), which is a key risk to human health in long-duration spaceflight.
SpaceX and Polaris Dawn will also collaborate with the Translational Research Institute for Space Health (TRISH), BioServe Space Technologies at the University of Colorado Boulder, Space Technologies Lab at Embry Riddle Aeronautical University, Weill Cornell Medicine, Johns Hopkins University Applied Physics Laboratory, the Pacific Northwest National Laboratory, and the U.S. Air Force Academy.
US20240355148 - LIGHT EMITTER ARRAY AND BEAM SHAPING ELEMENTS FOR EYE TRACKING WITH USER AUTHENTICATION AND LIVENESS DETECTION
An eye tracking system includes a light source, an image sensor, and a controller that controls the image sensor to capture a series of images of light reflections from the eye while the eye is stationary, determines blood flow characteristics from pattern changes in the captured series of images, and performs user authentication and/or liveness detection based on the detected blood flow characteristics.
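A rough, illustrative sketch of the idea (not the publication's algorithm): treat small temporal variations in the reflected-light pattern as evidence of blood flow and accept liveness only when such variation is present. The threshold and synthetic data below are assumptions for the example.

```python
import numpy as np

def liveness_from_reflections(image_series, min_rel_variation=0.002):
    """Crude liveness check: blood flow slightly modulates the intensity of light
    reflected from the eye over time, so even a stationary live eye shows small
    frame-to-frame pattern changes; a static photo or prosthetic shows almost none.

    image_series : (T, H, W) array of grayscale reflection images
    """
    means = image_series.reshape(len(image_series), -1).mean(axis=1)
    rel_variation = means.std() / (means.mean() + 1e-9)
    return rel_variation > min_rel_variation

rng = np.random.default_rng(0)
t = np.arange(60) / 30.0                                  # 2 s at 30 fps
pulse = 1.0 + 0.01 * np.sin(2 * np.pi * 1.2 * t)          # ~72 bpm intensity modulation
live_series = pulse[:, None, None] * np.ones((60, 8, 8)) + 0.001 * rng.standard_normal((60, 8, 8))
static_series = np.ones((60, 8, 8)) + 0.0001 * rng.standard_normal((60, 8, 8))
print(liveness_from_reflections(live_series))    # True: pulse-like variation present
print(liveness_from_reflections(static_series))  # False: essentially no variation
```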
A short trailer video for a project I’ve been working on at FinalSpark to demonstrate the capabilities of Neuroplatform, the world’s first wetware computing cloud platform. We basically created a mini proof of concept of ‘The Matrix’, as in ‘embedding human brains in a virtual world’ by transmitting sensory input to and from the brain organoid to let it interact with it via the internet.
• To read a short essay for more information, click here: https://danbur.online/EzaqCK6 • For more updates, follow me on social media: @danburonline
--------------------------------
Video chapters:
• 0:00 Introduction
• 0:19 How it works
• 0:32 About the brain organoid
• 0:48 About the simulated world
• 1:04 Just a URL away
--------------------------------
Video summary: This video introduces FinalSpark’s groundbreaking Neuroplatform demo, showcasing the world’s first human #brainorganoid embodied in a virtual environment via the internet. The project features a virtual butterfly controlled by a lab-grown mini-brain consisting of approximately 10,000 neurons. This mini-brain, connected to electrodes and neurochemical interfaces, processes sensory input from the virtual world and makes autonomous decisions to control the butterfly’s movements in real time.
The demo represents a significant milestone in #wetware computing and brain-computer interfaces. It allows users to interact with a 3D #virtualreality environment controlled by actual human neurons, accessible 24/7 through a web browser. The brain organoids, derived from induced pluripotent stem cells, can maintain strong neuronal activity for over 100 days.
FinalSpark’s proprietary multi-electrode and microfluidic systems enable extended stimulation of these brain organoids, a capability unique to their platform. This proof-of-concept demonstrates the potential of biological neural networks in computing, offering millions of times more energy efficiency than silicon-based systems and unparalleled learning abilities.
The project not only showcases the current capabilities of wetware computing but also points towards future applications in robotics, autonomous systems, and even the possibility of more complex brain-virtual world interactions. It represents a significant step towards the realisation of concepts previously confined to science fiction, such as “The Matrix,” and opens up new avenues for research in cognitive preservation and mind uploading.
This project could be a pioneer for future incarnations of VR technology or neural links. The brain of this worm transmits information to the program in real time.
Motion data from a head mounted display (HMD) is translated using a set of rules into reactions which are represented by visual indicators and displayed on a display. The accuracy of the translation is improved by using training data applied to the set of rules and collected according to a training data collection process where human observers are observing humans who are wearing HMDs and recording observations.
/PRNewswire/ - Innovative Eyewear (NASDAQ: LUCY; LUCYW), the developer of ChatGPT-enabled smart eyewear under the Lucyd®, Nautica®, Eddie Bauer®, and Reebok® brands, today announced the launch of the first generative AI fashion show for eyewear. In this striking digital performance, AI-generated models flaunt the latest real smart eyewear collections from Lucyd, including its collaborative collections with Reebok, Nautica and Eddie Bauer.
In virtual environments such as, for example, the metaverse, users can represent themselves as avatars. A user may use head-mounted displays (HMDs) as portals to the virtual environment. HMDs include virtual reality (VR), augmented reality (AR), and/or mixed reality (MR) devices including headsets, goggles, glasses, etc. With HMDs, the location of the eyes of the user wearing the HMD relative to cameras and other sensors in the HMD is predictable. Thus, the user's avatar appears naturally oriented in the virtual environment.
[0016] In addition to HMDs, other electronic devices may be used as portals for virtual environments. For example, a personal computer (PC) may be used as a metaverse portal, alternatively or in addition to HMDs. With some PC setups, such as for example laptop computers with external displays and/or multi-display desktop computers, the location of the camera and/or other sensors may be offset from the display of the virtual environment. For example, FIG. 1 shows an example laptop computer 102 with an example camera 104 positioned adjacent to an example external display 106. An example virtual environment 108 is presented on the external display 106. An example avatar 110 of an example user 112 is presented in the virtual environment 108. The position or orientation of the avatar 110 is determined based on data gathered by the camera 104.
[...]
"Also, in virtual environments, avatars can look at each other and even make virtual eye contact, so it feels to the user that other users' avatars are making eye contact. Virtual eye contact is inconsistent or prevented when an avatar is mis-oriented."
US20240305330 - LINE-OF-SIGHT (LoS) COMMUNICATION CAPABILITY FOR A NEAR-EYE DISPLAY DEVICE
The near-eye display devices may be used to establish discreet, low-power "walkie talkie" style communications when the users are within range and within an angle-of-arrival (AoA) of each other. For example, the angle-of-arrival (AoA) may be in a range from about 10 degrees to about 30 degrees (or more), letting the users communicate when they face each other. A line-of-sight (LoS) application executed on the near-eye display devices may be arranged to alert the users when line-of-sight (LoS) communication can be established. Thus, instead of going through a number of steps and establishing communication through an online communication application (and communicating over one or more networks such as the Internet), which may be cumbersome and power consuming, a user may simply accept or trigger the line-of-sight (LoS) communication when the other user is in range and within the angle-of-arrival (AoA).
The line-of-sight (LoS) communication session(s) may be facilitated by (typically short-range) personal area communication systems such as ultra-wide band (UWB) communication, Bluetooth Low Energy (BLE) communication or similar ones.
Thus, power consumption of the near-eye display device may be substantially reduced. Furthermore, two users in a crowded environment (e.g., a conference center) may communicate discreetly. The communication modes (i.e., audio, audio/video) may be selected by the user(s).
Moreover, a near-eye display device may be equipped with multiple wireless communication systems and a suitable one may be selected (and switched to) based on available power, a noise environment, a communication mode, or similar factors.
As mentioned herein, a line-of-sight (LoS) application may detect another user within range and angle-of-arrival (AoA) and present an option to initiate a communication session to the user (e.g., an icon, text, or other visual cue, and/or an audio alert). The user may make their selection (starting the communication session, ending the communication session, switching communication modes, etc.) through any input, such as touch (a sensor on the near-eye display device), gesture, body movement, audio command, or eye movement.
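The availability check such an application performs might look roughly like the sketch below; the positions, headings, and the 30 m range limit are invented for illustration (the angle limit loosely echoes the 10-30 degree AoA range mentioned above), and this is not code from the publication.

```python
import math

def los_available(my_pos, my_heading_deg, peer_pos, peer_heading_deg,
                  max_range_m=30.0, max_aoa_deg=30.0):
    """Decide whether to offer a line-of-sight session: the peer must be within
    range, and each device must see the other within its angle-of-arrival cone.

    Positions are (x, y) in meters; headings are degrees, 0 = +y, clockwise.
    """
    dx, dy = peer_pos[0] - my_pos[0], peer_pos[1] - my_pos[1]
    if math.hypot(dx, dy) > max_range_m:
        return False

    def off_axis(heading_deg, vx, vy):
        # Angular difference between a device's facing direction and the
        # bearing toward the other device.
        bearing = math.degrees(math.atan2(vx, vy)) % 360.0
        diff = abs(bearing - heading_deg % 360.0)
        return min(diff, 360.0 - diff)

    return (off_axis(my_heading_deg, dx, dy) <= max_aoa_deg and
            off_axis(peer_heading_deg, -dx, -dy) <= max_aoa_deg)

# Two users about 9 m apart, roughly facing each other: offer the session.
print(los_available((0, 0), 0, (2, 9), 185))   # True
print(los_available((0, 0), 90, (2, 9), 185))  # False: first user is facing away
```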
Welcome to our latest video, where we present an exciting new development in the field of research technology: the seamless synchronization between Cortivision's fNIRS and Tobii's eye tracker. This integration opens up new possibilities for researchers by combining functional near-infrared spectroscopy (fNIRS) with precise eye-tracking data.
In this video, we will showcase how this powerful combination can be utilized to gain deeper insights into cognitive processes and visual attention. We will demonstrate the setup process, provide examples of potential research applications, and highlight the benefits of using these synchronized technologies in your studies.
Whether you are involved in neuroscience, psychology, or human-computer interaction research, this cutting-edge integration offers unparalleled data richness and accuracy. Join us to explore how Cortivision and Tobii are advancing research capabilities and driving innovation.
Don't forget to like, comment, and subscribe for more updates on the latest research tools and technologies!
"Phygital Labs has launched Galerio, the first spatial cultural experience featuring Vietnam's Last Royal Dynasty, the Nguyen Dynasty. This immersive cultural experience is now accessible globally at galerio.io and supports Vision Pro and Quest. "
Do not miss the video from PR Newswire in the article, which introduces the engaging immersive cultural experience supported by Vision Pro and Quest.
Once again a fundamental message emerges: one may have the best design ever, but it is the quality and the culture behind it that make it take off and fly.