In 2019, Edward Chang, a neurosurgeon at the University of California, San Francisco, opened the skull of a 36-year-old man, nicknamed “Pancho,” and placed a thin sheet of electrodes on the surface of his brain.1
The electrodes gather electrical signals from the motor neurons that
control the movement of the mouth, larynx, and other body parts to
produce speech. A small port, implanted on top of Pancho’s head, relayed
the brain signals to a computer. This “brain-computer interface,” or
BCI, solved an intractable medical problem.
In 2003, Pancho, a field worker in California’s vineyards, was
involved in a car crash. Days after undergoing surgery, he suffered a
brainstem stroke, reported the New York Times Magazine.2
The stroke robbed Pancho of the power of speech. He could communicate
only by laboriously spelling out words one letter at a time with a
pointing device. After training with the computer outfitted with
deep-learning algorithms that interpreted his brain activity, Pancho
could think the words that he wanted to say, and they would appear on
the computer screen. Scientists called the results “groundbreaking”;
Pancho called them “life-changing.”
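To make the decoding step concrete, here is a minimal Python sketch of the general idea: a classifier is trained to map short windows of recorded neural activity to words, so imagined speech can be turned into text. It is not Chang's actual pipeline, which used deep-learning models on electrode recordings; the vocabulary, feature sizes, and data below are invented purely for illustration.

# Minimal sketch of speech decoding: classify windows of "neural features" into words.
# All data here are synthetic stand-ins; a real system learns from electrode recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
vocabulary = ["hello", "water", "yes", "no"]   # hypothetical target words
n_features = 128                               # stand-in for per-window neural features

# Give each word its own noisy neural "signature" and build a training set.
signatures = rng.standard_normal((len(vocabulary), n_features))
X = np.vstack([sig + 0.5 * rng.standard_normal((50, n_features)) for sig in signatures])
y = np.repeat(np.arange(len(vocabulary)), 50)

decoder = LogisticRegression(max_iter=1000).fit(X, y)

# At "speech" time, a new window of activity is decoded into a word on screen.
new_window = signatures[2] + 0.5 * rng.standard_normal(n_features)
print(vocabulary[decoder.predict(new_window.reshape(1, -1))[0]])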
The clinical success of BCIs (there are other stories
to go along with Pancho’s) appears to vindicate the futurists who claim
that BCIs may soon enhance the brains of healthy people. Most famously,
Ray Kurzweil, author of The Singularity Is Near, has asserted
that exponentially rapid developments in neuroscience, bioscience,
nanotechnology, and computation will coalesce and allow us to transcend
the limitations of our bodies and brains. A major part of this huge
shift will be the rise of artificial intelligences that are far more
capable than human brains. It is an inevitability of human evolution,
Kurzweil thinks, that the two kinds of intelligence will merge to form
powerful hybrid brains, which will define the future of humanity. This,
he predicted, would happen by 2045.
While futuristic scenarios like Kurzweil’s are exciting to ponder,
they are brought back down to Earth by the technological capabilities of
brain-computer hybrids as they exist today. BCIs are impressive, but
the path from helping stroke victims to giving people superpowers is
neither direct nor inevitable.
One of the first great steps in BCIs came in 1998, when neuroscientist Phil Kennedy
inserted a single electrode into the brain of Johnny Ray, a paralyzed
stroke survivor, and produced the first example of human mind control of
an external device via an implant. This enabled the “locked-in” Ray to
communicate by mentally moving a cursor to select letters on a computer
screen and earned Kennedy international acclaim.
Implanted BCIs can also work in the opposite direction, directing external
electrical signals to trigger specific neurons. In 2021, a team at the
University of Pittsburgh put electrodes
into the motor cortex of a paralyzed man to allow him to control a
robotic arm, and into his somatosensory cortex, where incoming sensory
impulses activate neurons.3 As he grasped an object with the
arm, he felt that he was contacting and holding the object through
signals sent by sensors in the robotic hand. This substantially improved
control of the artificial limb.
In another example, Columbia University biomedical engineer Ken Shepard has used advanced nanotechnology to construct a tiny chip a half-inch square with 65,000 microelectrodes.4
The idea is to place the chip on the surface of the brain’s visual
cortex and wirelessly send in data from a camera to restore sight to the
blind. If this device passes human trials, it will represent a big
advance over an earlier effort with fewer electrodes, which limited the
quality of the image a camera could send to the brain.
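A rough way to see why electrode count matters for image quality is to treat each electrode as roughly one "pixel" of the picture delivered to the visual cortex. The Python sketch below assumes a square electrode grid purely for illustration (the chip's real layout is not described here) and block-averages a camera frame down to the grid's resolution; the smaller electrode count stands in for a hypothetical earlier device.

# Rough illustration: more electrodes allow a finer grid, hence a sharper image.
import numpy as np

def grid_side(n_electrodes: int) -> int:
    """Side length of a square grid that holds n_electrodes."""
    return int(np.floor(np.sqrt(n_electrodes)))

def downsample(image: np.ndarray, side: int) -> np.ndarray:
    """Block-average an image down to side x side 'electrode pixels'."""
    h, w = image.shape
    ys = np.linspace(0, h, side + 1, dtype=int)
    xs = np.linspace(0, w, side + 1, dtype=int)
    out = np.empty((side, side))
    for i in range(side):
        for j in range(side):
            out[i, j] = image[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
    return out

camera_frame = np.random.rand(480, 640)      # stand-in for a camera image
for n in (100, 65_000):                      # 100 electrodes is a hypothetical older device
    side = grid_side(n)
    print(f"{n} electrodes -> {side}x{side} grid")
    _ = downsample(camera_frame, side)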
THE SINGULARITY IS NOT NEAR: We are still a long way, “decades to centuries,” says Princeton University neuroscientist Michael Graziano, from augmenting the whole brain or achieving the science-fiction dream of uploading its contents to a computer. Image by lassedesignen / Shutterstock.
Along with triggering sensory responses, electrical or other input to
the brain can alter its functions in a process known as
neuromodulation. In deep brain stimulation (DBS), a small “brain
pacemaker” is embedded under the skin in the upper chest and sends
electrical impulses to electrodes placed in specific brain regions. DBS
was approved by the FDA to treat Parkinson’s disease and manage
epileptic seizures, and has been used to treat other conditions such as
chronic pain.
Some neuromodulation methods work without invasive surgery. In
transcranial direct current stimulation (tDCS), electrodes placed on the
scalp and connected to a battery produce a weak electric current that
influences brain activity. Any electronics hobbyist can build this
simple device, and commercial models can be found for as little as $125.
tDCS is not FDA-approved and there are concerns about its unregulated use, but tests suggest it may relieve certain conditions and improve brain function. In 2020 and 2022 the FDA approved full clinical trials to test the efficacy of tDCS in treating depression.
These examples show how the capability to record and influence brain
activity can benefit body and mind for those who have lost function in
either. The new pathways to the brain also suggest ways to enhance the
bodies of healthy people; for instance, through a neurally controlled
exoskeleton that provides greater-than-human power or speed. But can
these technologies augment human cognition? Can human and machine
intelligences merge into a greater whole?
In 2011, Paul Allen, cofounder of Microsoft and founder of an
institute to study the brain, and AI expert Mark Greaves, declared the singularity was not near,
and called Kurzweil’s prediction of a major realignment in 2045
“far-fetched,” not least because it is unlikely we will understand the human brain so soon. In 2022, we remain at the beginning of understanding the brain.
We have, though, made incredible progress in understanding the brain, progress that highlights how much is left to do. Kurzweil projected that swarms of nanobots would explore the human brain in unprecedented detail. We’re nowhere near that technology. Rather, the National Institutes of Health’s BRAIN Initiative has committed $500 million to bring together hundreds of scientists to map and catalog the brain’s 86 billion neurons with existing methods such as staining them to reveal their shapes. And instead of the software models of human intelligence Kurzweil predicted, a €1 billion European project to simulate the brain on a supercomputer has, after 10 years, simulated only the mouse brain, which is a thousand times smaller.
The state of BCIs today presents another stumbling block on the road to the singularity. Surgically implanted electrodes and non-invasive methods like tDCS have serious drawbacks. Inserting wires and silicon chips
requires skilled brain surgery and risks infection or collateral damage.
Implants can deteriorate within the brain’s wet environment, and the
recipient is awkwardly tied to a computer by the connecting wires.
Electrodes are implanted only in clinically monitored patients like
Pancho. They are not implanted for human experimentation, nor is their
use in healthy people likely to earn regulatory approval anytime soon.
Technology companies have announced improved and less invasive surgically implanted BCIs. Neuralink, founded by Elon Musk and redolent of his grand ambitions, promises that its BCIs will help clinicians treat people with paralysis and “could expand our abilities, our community, and our world.” After several years of development, though, Neuralink has yet to begin human trials. Synchron, another start-up dedicated to the treatment of people with neurological diseases, has completed human trials abroad and has just started an FDA-approved trial of its method, which places electrodes inside the brain’s natural blood vessels without major surgery. Both
efforts would use Bluetooth technology to eliminate wires from the brain
and increase portability.
The other option is to augment brains with non-invasive methods.
Electroencephalography and tDCS can record and stimulate brains with
electrodes placed on the scalp, and other contactless means use magnetic
fields, light, or ultrasound. They too, however, present problems.
Compared to electrode implants, some non-invasive approaches display
lower spatial resolution and noisier data. And although they offer fewer
risks than brain surgery, their side effects, such as unanticipated long-term changes in the brain, need further scrutiny.
A 2019 summary review
by Davide Valeriani at Harvard Medical School, and Caterina Cinel and
Riccardo Poli at the Brain Computer Interfaces and Neural Engineering
Laboratory at the University of Essex in England, looks at the ongoing
research into BCIs designed not only for people with severe disabilities
but for human cognitive augmentation in general.5 The
authors show that researchers and clinicians today can choose from among
10 different methods to record or affect brain activity and enhance it.
One brain function open to enhancement is perception. Non-invasive BCIs have
improved performance in discriminating among different shapes, tracking
multiple objects, and in a more complex task, viewing a video clip and
determining if a possible threat is present. Decision-making, another
important brain function, draws on several mental abilities and has been
extensively studied. But results from using non-invasive BCIs to improve decision-making have been unimpressive; the data they yield are too noisy unless they are averaged over many measurements or across several users.
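The noise problem is statistical at heart: averaging independent measurements shrinks random error roughly as one over the square root of the number of trials. The short Python sketch below illustrates this; the signal strength, noise level, and trial counts are invented for illustration and do not come from any of the studies reviewed.

# Why averaging helps: independent noise shrinks roughly as 1/sqrt(N) when N
# single-trial measurements (or N users) are averaged. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
true_signal = 1.0          # hypothetical "decision evidence" we want to read out
noise_sd = 5.0             # single-trial noise, much larger than the signal

for n_trials in (1, 10, 100, 1000):
    trials = true_signal + noise_sd * rng.standard_normal(n_trials)
    estimate = trials.mean()
    expected_error = noise_sd / np.sqrt(n_trials)
    print(f"{n_trials:5d} trials: estimate = {estimate:6.2f}, expected error ~= {expected_error:.2f}")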
Augmentation of memory and learning grows more important as the population ages and memory loss becomes more common. Studies show that sessions of
non-invasive stimulation can temporarily improve spatial memory and the
working memory that briefly holds information. One set of experiments
gives clues to a memory prosthesis, although it would require invasive
surgery. Researchers at Wake Forest Baptist Medical Center, the University of Southern California, and elsewhere showed that electrical stimulation through electrodes placed in the part of the brain called the hippocampus enhanced memory in animals and in human subjects, who showed an average improvement of 36 percent in short- and long-term memory.6 This work met ethical standards because the subjects were epileptics who already had implants that controlled their seizures.
Valeriani, Cinel, and Poli predict that by 2040, most forms of
non-invasive brain augmentation will be in field-testing for general
use, or perhaps even available as wearable neurotechnology.
These achievements by 2040 would represent astonishing technological
progress but are less grandiose than the vision of human brains merging
with AIs by 2045 to reach unprecedented capability. Instead of an
imagined total meshing of brain and machine, current methods affect only
portions of the brain and enhance only aspects of cognitive ability
such as perception, not the entire brain. We are still a long way from
augmenting the whole brain, or even achieving that science-fiction dream
of uploading its contents to a computer.
In 2019, Princeton University neuroscientist Michael Graziano explained why.7
He believes mind uploading will happen, but only after we simulate the
86 billion neurons in our brains and reproduce how they are connected
through 100 trillion synapses, the “connectome” that shapes whole-brain
functions. “The most wildly optimistic predictions place mind uploading
within a few decades, but I would not be surprised if it took
centuries,” Graziano wrote. Since neuroscience is rapidly developing, I
recently asked Graziano if, three years later, he had seen any progress
that would alter his original assessment. His response: “Decades to
centuries is still my guess.”
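Some rough arithmetic suggests why Graziano hedges in decades to centuries. The Python lines below only restate the numbers he cites, 86 billion neurons and 100 trillion synapses; the bytes-per-synapse figure is an arbitrary assumption of mine, added just to show that even a bare wiring map would be enormous.

# Back-of-envelope arithmetic on the scale of the connectome. Neuron and synapse
# counts are those cited above; storage per synapse is a made-up assumption.
neurons = 86e9
synapses = 100e12
print(f"average synapses per neuron: {synapses / neurons:,.0f}")   # ~1,163

bytes_per_synapse = 10                      # hypothetical bookkeeping per connection
map_size_pb = synapses * bytes_per_synapse / 1e15
print(f"rough size of a bare wiring map: {map_size_pb:.0f} petabyte(s)")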
Neurotechnology is evolving, but not explosively enough to bring
humanity to a new stage by 2045. Future projections of technology often
depend on two assumptions: the technological imperative—new technology
will always come, and once available, people will develop and exploit it
to the fullest; and exponential growth, exemplified by Moore’s Law,
which states that the number of transistors on a computer chip doubles
roughly every two years.
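Moore's Law is easy to state as a formula: with a two-year doubling period, a chip holding N0 transistors today is projected to hold N0 * 2**(t / 2) after t years. The small Python sketch below simply evaluates that projection; the starting transistor count is an arbitrary placeholder.

# Moore's Law as a formula: transistor counts double roughly every two years,
# so N(t) = N0 * 2**(t / doubling_period). The starting count is a placeholder.
def moores_law(n0: float, years: float, doubling_period: float = 2.0) -> float:
    """Projected transistor count after `years`."""
    return n0 * 2 ** (years / doubling_period)

n0 = 1e9                                    # a hypothetical billion-transistor chip
for years in (2, 10, 20):
    print(f"after {years:2d} years: {moores_law(n0, years):.2e} transistors")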
Neither assumption is inviolable, nor is either necessarily appropriate for neurotechnology.
Exponential growth can reach a plateau: We may already be at a limit of
chip technology where Moore’s Law no longer applies. And the
technological imperative should not be our sole guide to how humanity
can help itself evolve beyond its biological heritage. Unlike chip
technology, neurotechnology inherently affects people, from the ill,
injured, and disabled, to citizens who may or may not want their brains
to be accessed. Here the technological imperative needs to be tempered
by an ethical imperative, worked out by society, which would, and
should, slow the evolution of brain and machine until we know it
benefits humanity.
Sidney Perkowitz is the Candler Professor of Physics Emeritus at Emory University. His latest books are Physics: a Very Short Introduction (2019, audiobook forthcoming 2022) and Science Sketches: the Universe from Different Angles (2022).
A doctor examines a patient’s lungs using a computed tomography scan in Moscow, Russia. BU researcher Bela Suki says that many patients, despite not showing signs of lung abnormalities during a scan, suffer from dangerously low oxygen levels, a condition known as silent hypoxia. Credit: Sputnik via AP
COVID-19 & Low Blood O2
Three Reasons Why COVID-19 Can Cause Silent Hypoxia
BU biomedical engineers used computer modeling to investigate why blood oxygen drops so low in many COVID-19 patients
More than six months since COVID-19 began spreading in the United
States, scientists are still solving the many puzzling aspects of how
the novel coronavirus attacks the lungs and other parts of the body. One
of the biggest and most life-threatening mysteries is how the virus
causes “silent hypoxia,” a condition in which oxygen levels in the body are abnormally low and which can irreparably damage vital organs if it goes undetected for too long. Now, thanks to computer models and comparisons
with real patient data, Boston University biomedical engineers and
collaborators from the University of Vermont have begun to crack the
mystery.
Despite experiencing dangerously low levels of oxygen, many people with severe cases of COVID-19 show no shortness of breath or difficulty breathing. Hypoxia’s ability to quietly inflict damage is why it has been dubbed “silent.” In coronavirus
patients, it’s thought that the infection first damages the lungs,
rendering parts of them incapable of functioning properly. Those tissues
lose oxygen and stop working, no longer infusing the bloodstream with
oxygen, causing silent hypoxia. But exactly how that domino effect
occurs has not been clear until now.
“We didn’t know [how this] was physiologically possible,” says Bela Suki,
a BU College of Engineering professor of biomedical engineering and of
materials science and engineering and one of the authors of the study.
Some coronavirus patients have experienced what some experts have described
as levels of blood oxygen that are “incompatible with life.”
Disturbingly, Suki says, many of these patients showed little to no sign of abnormalities when they underwent lung scans.
To help get to the bottom of what causes silent hypoxia, BU
biomedical engineers used computer modeling to test out three different
scenarios that help explain how and why the lungs stop providing oxygen
to the bloodstream. Their research, which has been published in Nature Communications,
reveals that silent hypoxia is likely caused by a combination of
biological mechanisms that may occur simultaneously in the lungs of
COVID-19 patients, according to biomedical engineer Jacob Herrmann, a research postdoctoral associate in Suki’s lab and the lead author of the new study.
Normally, the lungs perform the life-sustaining duty of gas exchange, providing oxygen to every cell in the body
as we breathe in and ridding us of carbon dioxide each time we exhale.
Healthy lungs keep the blood oxygenated at a level between 95 and 100
percent—if it dips below 92 percent, it’s a cause for concern and a
doctor might decide to intervene with supplemental oxygen. (Early in the
coronavirus pandemic, when clinicians first started sounding the alarm
about silent hypoxia, oximeters flew off store shelves
as many people, worried that they or their family members might have to
recover from milder cases of coronavirus at home, wanted to be able to
monitor their blood oxygen levels.)
The researchers first looked at how COVID-19 impacts the lungs’
ability to regulate where blood is directed. Normally, if areas of the
lung aren’t gathering much oxygen due to damage from infection, the
blood vessels will constrict in those areas. This is actually a good
thing that our lungs have evolved to do, because it forces blood to
instead flow through lung tissue replete with oxygen, which is then
circulated throughout the rest of the body.
But according to Herrmann, preliminary clinical data have suggested
that the lungs of some COVID-19 patients had lost the ability to restrict blood flow to already damaged tissue and, on the contrary, were potentially opening up those blood vessels even more—something that is
hard to see or measure on a CT scan.
Using a computational lung model, Herrmann, Suki, and their team
tested that theory, revealing that for blood oxygen levels to drop to
the levels observed in COVID-19 patients, blood flow would indeed have
to be much higher than normal in areas of the lungs that can no longer
gather oxygen—contributing to low levels of oxygen throughout the entire
body, they say.
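The basic effect can be shown with a much cruder mixing calculation than the team's computational lung model: arterial oxygen saturation is roughly the flow-weighted average of blood leaving healthy regions (nearly fully saturated) and blood shunted through damaged regions that no longer add oxygen. The saturation values and flow fractions in the Python sketch below are illustrative assumptions, not numbers from the study.

# Crude flow-weighted mixing estimate (not the authors' lung model): blood through
# healthy regions leaves nearly fully saturated, while blood routed through damaged,
# non-oxygenating regions stays near venous saturation. All numbers are illustrative.
def arterial_saturation(damaged_flow_fraction: float,
                        healthy_sat: float = 98.0,
                        venous_sat: float = 75.0) -> float:
    """Flow-weighted arterial oxygen saturation, in percent."""
    return (damaged_flow_fraction * venous_sat
            + (1.0 - damaged_flow_fraction) * healthy_sat)

for fraction in (0.1, 0.3, 0.5):
    print(f"{fraction:.0%} of blood flow through damaged tissue -> "
          f"~{arterial_saturation(fraction):.0f}% saturation")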
Next, they looked at how blood clotting may impact blood flow in
different regions of the lung. When the lining of blood vessels gets inflamed from COVID-19 infection, tiny blood clots too small to be seen
on medical scans can form inside the lungs. They found, using computer
modeling of the lungs, that this could incite silent hypoxia, but alone
it is likely not enough to cause oxygen levels to drop as low as the
levels seen in patient data.
Last, the researchers used their computer model to find out if
COVID-19 interferes with the normal ratio of air-to-blood flow that the
lungs need to function normally. This type of mismatched air-to-blood
flow ratio is something that happens in many respiratory illnesses, such as asthma, Suki says, and it could be a contributor to the severe, silent hypoxia that has been observed in COVID-19 patients. Their models suggest that for this to be a cause of
silent hypoxia, the mismatch must be happening in parts of the lung that
don’t appear injured or abnormal on lung scans.
Altogether, their findings suggest that a combination of all three
factors is likely to be responsible for the severe cases of low oxygen
in some COVID-19 patients. By having a better understanding of these
underlying mechanisms, and how the combinations could vary from patient
to patient, clinicians can make more informed choices about treating
patients using measures like ventilation and supplemental oxygen. A
number of interventions are currently being studied, including a
low-tech intervention called prone positioning
that flips patients over onto their stomachs, allowing for the back
part of the lungs to pull in more oxygen and evening out the mismatched
air-to-blood ratio.
“Different people respond to this virus so differently,” says Suki.
For clinicians, he says it’s critical to understand all the possible
reasons why a patient’s blood oxygen might be low, so that they can
decide on the proper form of treatment, including medications that could
help constrict blood vessels, bust blood clots, or correct a mismatched
air-to-blood flow ratio.
This research is supported by the National Heart, Lung, and Blood Institute.