healthcare technology
The ways in which technology benefits healthcare
Curated by nrip

Understanding the differences between biological and computer vision


Since the early years of artificial intelligence, scientists have dreamed of creating computers that can “see” the world. As vision plays a key role in many things we do every day, cracking the code of computer vision seemed to be one of the major steps toward developing artificial general intelligence.

 

But like many other goals in AI, computer vision has proven to be easier said than done. In the past decades, advances in machine learning and neuroscience have helped make great strides in computer vision. But we still have a long way to go before we can build AI systems that see the world as we do.

 

Biological and Computer Vision, a book by Harvard Medical School Professor Gabriel Kreiman, provides an accessible account of how humans and animals process visual data and how far we’ve come toward replicating these functions in computers.

 

Kreiman’s book helps readers understand the differences between biological and computer vision. It details how billions of years of evolution have equipped us with a complicated visual processing system, and how studying it has helped inspire better computer vision algorithms.

 

Kreiman also discusses what separates contemporary computer vision systems from their biological counterparts.

Hardware differences

Biological vision is the product of millions of years of evolution. There is no reason to reinvent the wheel when developing computational models. We can learn from how biology solves vision problems and use the solutions as inspiration to build better algorithms.

 

Before scientists could digitize vision, they had to overcome the huge hardware gap between biological and computer vision. Biological vision runs on an interconnected network of cortical cells and organic neurons; computer vision, on the other hand, runs on electronic chips composed of transistors.

 

Architecture differences

There’s a mismatch between the high-level architecture of artificial neural networks and what we know about the mammalian visual cortex.
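
To make that mismatch concrete, the minimal sketch below (not taken from Kreiman's book; the framework, layer sizes, and class names are illustrative assumptions) shows the strictly feedforward convolutional stack that most contemporary vision models use, whereas the visual cortex is heavily recurrent and relies on feedback connections.

```python
# A minimal, illustrative convolutional network: information flows one way,
# unlike the recurrent, feedback-laden biological visual system.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # A strictly feedforward stack of convolutions and poolings.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)              # single feedforward pass
        return self.classifier(x.flatten(1))

# One forward pass on a dummy 32x32 RGB image.
logits = TinyConvNet()(torch.randn(1, 3, 32, 32))
print(logits.shape)  # torch.Size([1, 10])
```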

Goal differences

Several studies have shown that our visual system can dynamically tune its sensitivities. Creating computer vision systems with this kind of flexibility, however, remains a major challenge.

Current computer vision systems are designed to accomplish a single task.

Integration differences

In humans and animals, vision is closely tied to smell, touch, and hearing. The visual, auditory, somatosensory, and olfactory cortices interact and pick up cues from each other to adjust their inferences about the world. In AI systems, on the other hand, each of these modalities typically exists as a separate system.
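
As a rough illustration of that separation, the sketch below wires independent vision and audio encoders together only at a final fusion layer, the "late fusion" pattern common in today's multimodal systems. The framework, feature sizes, and module names are assumptions for illustration, not a description of any particular system.

```python
# Illustrative "late fusion": each modality is processed in isolation and the
# features are only combined at the very end -- unlike cortical areas, which
# exchange cues throughout processing.
import torch
import torch.nn as nn

class LateFusionModel(nn.Module):
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.vision_encoder = nn.Sequential(nn.Linear(512, 128), nn.ReLU())
        self.audio_encoder = nn.Sequential(nn.Linear(128, 128), nn.ReLU())
        # Fusion happens only here, after independent encoding.
        self.head = nn.Linear(128 + 128, num_classes)

    def forward(self, vision_feat: torch.Tensor, audio_feat: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.vision_encoder(vision_feat),
                           self.audio_encoder(audio_feat)], dim=-1)
        return self.head(fused)

out = LateFusionModel()(torch.randn(2, 512), torch.randn(2, 128))
print(out.shape)  # torch.Size([2, 5])
```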

 

read more at https://venturebeat.com/2021/05/15/understanding-the-differences-between-biological-and-computer-vision/

 

 

Karolina Belter's curator insight, May 23, 2022 3:22 AM
Biological and computer vision: similarities and differences.

BrainNet, an interface to communicate between human brains, could soon make Telepathy real


BrainNet provides the first multi-person brain-to-brain interface, allowing noninvasive, direct collaboration between human brains. It can help small teams collaborate to solve a range of tasks using direct brain-to-brain communication.

How does BrainNet operate?

The noninvasive interface combines electroencephalography (EEG) to record brain signals and transcranial magnetic stimulation (TMS) to deliver the required information to the brain.

 

For now, the interface allows three human subjects to collaborate on and solve a task using direct brain-to-brain communication.

 

Two out of three human subjects are “Senders”.

 

The senders’ brain signals are decoded using real-time EEG data analysis, which extracts the decisions that need to be communicated to solve the task at hand.

 

Take the example of a Tetris-like game, where you must quickly decide whether to rotate a block or drop it as it is in order to fill a line.

 

The senders’ signals (decisions) are transmitted via the Internet to the brain of the third subject, the “Receiver”.

 

The decisions are delivered to the receiver’s brain via magnetic stimulation of the occipital cortex.

 

The receiver cannot see the game screen, and therefore cannot decide independently whether the block needs to be rotated.

 

The receiver integrates the decisions received and, using an EEG interface, makes an informed call on whether to rotate the block or keep it in its current position.

 

A second round of the game allows the senders to validate the previous move and provide feedback on the receiver’s action.
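
To make the flow of information concrete, here is a toy, end-to-end sketch of the decode-transmit-integrate loop described above. The power-threshold "decoder", the simple integration rule, and every parameter value are illustrative assumptions; this is not the BrainNet team's actual implementation.

```python
# Toy BrainNet-style pipeline on simulated signals. A plain Python list stands
# in for both the Internet link and the TMS stimulation step.
import numpy as np

rng = np.random.default_rng(0)

def decode_sender_decision(eeg_window: np.ndarray, threshold: float = 1.0) -> str:
    """Toy stand-in for real-time EEG decoding: compare signal power against
    a threshold and map it to a rotate / keep decision."""
    power = float(np.mean(eeg_window ** 2))
    return "rotate" if power > threshold else "keep"

def receiver_integrates(decisions: list[str]) -> str:
    """Toy stand-in for the receiver: integrate the senders' cues and issue
    a final command via the (simulated) EEG interface."""
    return "rotate" if decisions.count("rotate") >= len(decisions) / 2 else "keep"

# Two senders each produce a short EEG window while watching the game screen.
sender_windows = [rng.normal(0, 1.2, size=256), rng.normal(0, 0.8, size=256)]
sent = [decode_sender_decision(w) for w in sender_windows]   # "transmitted" online
print("senders:", sent, "-> receiver:", receiver_integrates(sent))
```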

 

more at https://hub.packtpub.com/brainnet-an-interface-to-communicate-between-human-brains-could-soon-make-telepathy-real/

 


Where are memories of familiar places stored in the brain?


As we move through the world, what we see is seamlessly integrated with our memory of the broader spatial environment.

 

How does the brain accomplish this feat? A new study from Dartmouth College reveals that three regions of the brain in the posterior cerebral cortex, which the researchers call "place-memory areas," form a link between the brain's perceptual and memory systems. The findings are published in Nature Communications.

 

For the study, the researchers employed an innovative methodology. Participants were asked to perceive and recall places they had been to in the real world during functional magnetic resonance imaging (fMRI), which produced high-resolution, subject-specific maps of brain activity. Past studies on scene perception and memory have often used stimuli that participants knew of but had never visited, like famous landmarks, and have pooled data across many subjects. By mapping the brain activity of individual participants using real-world places they had been to, the researchers were able to untangle the brain's fine-grained organization.

 

In one experiment, 14 participants provided a list of people they knew personally and places they had visited in real life (e.g., their father or their childhood home). Then, while in the fMRI scanner, the participants imagined that they were seeing those people or visiting those places. Comparing the brain activity between people and places revealed the place-memory areas. Importantly, when the researchers compared these newly identified regions to the brain areas that process visual scenes, the new regions were overlapping but distinct.
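
For readers who want to see the shape of such an analysis, below is a hedged sketch of a people-versus-places contrast using the open-source nilearn library. The file name, TR, and event timings are hypothetical placeholders; this is not the Dartmouth team's analysis code.

```python
# Illustrative people-vs-places GLM contrast with nilearn. All data paths and
# timing values are made-up placeholders.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel
from nilearn import plotting

# Hypothetical event log: when the participant imagined a personally familiar
# person vs. a personally visited place during the scan.
events = pd.DataFrame({
    "onset":      [0, 20, 40, 60],            # seconds, illustrative
    "duration":   [10, 10, 10, 10],
    "trial_type": ["people", "places", "people", "places"],
})

model = FirstLevelModel(t_r=2.0, hrf_model="spm")        # TR assumed
model = model.fit("sub-01_task-imagery_bold.nii.gz",      # placeholder filename
                  events=events)

# Voxels more active for remembered places than for familiar people.
z_map = model.compute_contrast("places - people", output_type="z_score")
plotting.plot_stat_map(z_map, threshold=3.0, title="places > people (toy)")
```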

 

 "Learning how the mind is organized is at the heart of the quest of understanding what makes us human.

 

The place-memory network provides a new framework for understanding the neural processes that drive memory-guided visual behaviors, including navigation," explains Robertson.

 

The research team is currently using virtual reality technology to explore how representations in the place-memory areas evolve as people become more familiar with new environments.

 

read the original unedited article at https://medicalxpress.com/news/2021-05-reveals-memories-familiar-brain.html

 

 

read the study paper "A network linking scene perception and spatial memory systems in posterior cerebral cortex"  at http://dx.doi.org/10.1038/s41467-021-22848-z

 

 

Nassima Chraibi's curator insight, January 9, 2023 12:30 PM
This article covers a study describing "place-memory areas," identified with fMRI while participants recalled specific, familiar places. A map of the relevant brain regions was established, with activity evolving according to how familiar the place was.

Can Computing Keep up With the Neuroscience Data Deluge?


When an imaging run generates 1 terabyte of data, analysis becomes the problem


Today's neuroscientists have some magnificent tools at their disposal. They can, for example, examine the entire brain of a live zebrafish larva and record the activation patterns of nearly all of its 100,000 neurons in a process that takes only 1.5 seconds.


The only problem: One such imaging run yields about 1 terabyte of data, making analysis the real bottleneck as researchers seek to understand the brain.
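
A quick back-of-envelope calculation shows how imaging at this rate reaches that scale; every parameter below is an illustrative assumption rather than the actual acquisition settings.

```python
# Rough estimate of data volume for whole-brain light-sheet imaging.
# All parameters are assumed for illustration only.
pixels_per_plane = 2048 * 1024      # assumed camera frame size
planes_per_volume = 40              # assumed number of z-planes per brain volume
bytes_per_pixel = 2                 # 16-bit pixels
volumes = 3600 / 1.5                # one brain volume every 1.5 s, for one hour

total_bytes = pixels_per_plane * planes_per_volume * bytes_per_pixel * volumes
print(f"{total_bytes / 1e12:.2f} TB")   # ~0.4 TB under these assumptions --
                                        # the same order of magnitude as the
                                        # ~1 TB per run quoted above
```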


To address this issue, scientists at Janelia Farm Research Campus have come up with a set of analytical tools designed for neuroscience and built on a distributed computing platform called Apache Spark. In their paper in Nature Methods, they demonstrate their system's capabilities by making sense of several enormous data sets. (An image accompanying the original article shows the whole-brain neural activity of a zebrafish larva exposed to a moving visual stimulus; the different colors indicate which neurons activated in response to movement to the left or right.)


The researchers argue that the Apache Spark platform offers an improvement over a more popular distributed computing model known as Hadoop MapReduce, which was originally based on Google's search engine technology. 


The researchers have made their library of analytic tools, which they call Thunder, available to the neuroscience community at large. With U.S. government money pouring into neuroscience research for the new BRAIN Initiative, which emphasizes recording from the brain in unprecedented detail, this computing advance comes just in the nick of time. 
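
For readers unfamiliar with the programming model, here is a generic PySpark sketch of the distribute-then-map pattern that Spark-based tools such as Thunder build on. It is not Thunder's own API, and the data here is randomly generated at toy scale.

```python
# Generic Spark pattern: distribute per-neuron time series across workers and
# compute a simple statistic for each in parallel. Data is simulated.
import numpy as np
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("neuron-stats-sketch").getOrCreate()
sc = spark.sparkContext

# Pretend each record is (neuron_id, fluorescence time series).
rng = np.random.default_rng(42)
records = [(i, rng.normal(size=500)) for i in range(10_000)]   # toy scale

# The map step runs on many workers at once; at real terabyte scale the
# records would be loaded from distributed storage rather than parallelize().
stats = (sc.parallelize(records, numSlices=64)
           .map(lambda kv: (kv[0], float(np.mean(kv[1])), float(np.std(kv[1]))))
           .collect())

print(stats[0])     # (neuron_id, mean, std) for the first neuron
spark.stop()
```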


more at http://spectrum.ieee.org/tech-talk/biomedical/imaging/can-computing-keep-up-with-the-neuroscience-data-deluge/


