What Silicon Valley needs to know about the brain, and what it doesn't

Written by Marom Bikson, Ph.D. | @MaromBikson
Marom Bikson is the Shames Professor of Biomedical Engineering at The City College of New York.





Part 1: Read / Write Brain

Silicon Valley wants to get inside your head - literally. While these projects adopt mixes of commercial stealth and elaborate publicity, and the breadth of applications spans the entire human condition, all these technologies take one of two product forms: Either as a chip implanted in your head transmitting to a computer or as a wearable cap transmitting to a computer.

Silicon Valley’s goals for tapping into brains are not new, and the basic ideas and approaches have been well trodden, over long decades, by biomedical scientists and engineers. So the central question I want to address is: Can Silicon Valley crack so-far intractable technical challenges and deliver products with dazzling performance? In part 1, I explain what brain-interfacing technology is supposed to do and show that, at least in principle, Silicon Valley is adopting old approaches from academia.

In the interest of disclosure: my lab also works on brain interfaces, and we’ve collaborated with some of the biggest companies in the medical device field and in Silicon Valley. But my analysis here is based only on publicly available information. In fact, the commonality of ideas for how to interface with the brain is the point of this section.

In general, what’s to be gained by connecting technology directly to the brain? First, these technologies want to know what you’re thinking. “Reading the brain” takes many forms, including diagnostic medical imaging used to see inside the head (like hospital MRI), or Brain-Computer Interfaces (BCI) used to translate thought into machine actions. A BCI detects brain signals (thoughts) and then translates them into a message that can be carried out by a device – such as instructions to move a robotic arm, or to type a word. Different types of sensors are used, but the most common are electrodes – electrodes translate the brain’s electrical current flow, which is carried by ions, into the computer’s electrical flow, which is carried by electrons. Other technologies bounce light through the brain in order to measure it. That brain activity and a person’s mental intention can be measured is well established – but so far efforts to leverage this data have been modest in outcomes, complex in fine-tuning, and short-lived in application.
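
To make the read path concrete, here is a toy sketch of the pipeline described above – sample an electrode, filter out noise, classify the signal, emit a machine command. Everything here (the signal values, the threshold, the command names) is invented for illustration; real BCIs use far more sophisticated filtering and machine-learned decoders.

```python
# Toy sketch of a brain-computer-interface "read" pipeline:
# electrode samples -> noise filtering -> feature -> classified command.
# All numbers and labels are invented for illustration.

def moving_average(samples, window=3):
    """Crude low-pass filter: smooth out high-frequency noise."""
    out = []
    for i in range(len(samples) - window + 1):
        out.append(sum(samples[i:i + window]) / window)
    return out

def classify(filtered, threshold=1.0):
    """Map the cleaned signal to a machine command.
    Here: sustained high amplitude -> 'move', otherwise 'rest'."""
    mean_amplitude = sum(abs(x) for x in filtered) / len(filtered)
    return "move" if mean_amplitude > threshold else "rest"

# A noisy burst of "intent" followed by a quiet baseline epoch.
intent_epoch = [2.1, 1.8, 2.4, 1.9, 2.2, 2.0]
rest_epoch = [0.1, -0.2, 0.15, 0.05, -0.1, 0.2]

print(classify(moving_average(intent_epoch)))  # -> move
print(classify(moving_average(rest_epoch)))    # -> rest
```

The hard part in practice is not this scaffolding but the middle step: extracting a stable, meaningful feature from signals where the noise can be far larger than the signal.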

Second, these technologies want to change how you think or how you feel (see my TEDx talk); not indirectly, like the warm feeling you get when winning a video game, but directly, by changing your brain. In biomedical science, this “writing the brain” is called neuromodulation or brain stimulation. Different types of brain-writing devices are used, but most use electrodes: just as in sensing but working the other way, converting computer electrical signals into brain electrical signals. Other forms of energy used to write the brain include powerful magnets, light, and ultrasound. In treating some brain diseases, brain-writing approaches have had their successes, but only in specific patient groups.

A device could do both brain-reading and brain-writing; so information flows both ways. For example, a device may read the brain to monitor mental state (how you’re feeling) and then write the brain only when a poor mental state is detected (to make you happier). When a computer both monitors the brain and automatically decides how to change the brain, that's called “closed loop” - a loop between your brain and the machine.
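
The closed-loop idea amounts to a simple feedback controller: read a state estimate, and write (stimulate) only when the state crosses a threshold. A minimal sketch – the “mood” metric, the threshold, and the stimulation effect are all invented for illustration, not drawn from any real device:

```python
# Toy closed-loop neuromodulation: the device reads a "state" signal
# and writes (stimulates) only when the state dips below a threshold.
# The state values and stimulation effect are invented for illustration.

def closed_loop(states, threshold=0.5, boost=0.4):
    """Return (adjusted states, number of stimulation events)."""
    log = []
    stimulations = 0
    for s in states:
        if s < threshold:   # "poor mental state" detected (read)
            s += boost      # stimulate to raise it (write)
            stimulations += 1
        log.append(round(s, 2))
    return log, stimulations

readings = [0.9, 0.6, 0.4, 0.3, 0.7]
adjusted, n_stims = closed_loop(readings)
print(adjusted, n_stims)  # -> [0.9, 0.6, 0.8, 0.7, 0.7] 2
```

The loop itself is trivial; what is not trivial is obtaining a reliable read-out of mental state and a write operation with a predictable effect – which is where the rest of this article focuses.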

Understanding that all these technologies
A) take the form of either a wearable or an implanted product, and
B) can read and/or write the brain,
provides a simple classification for the different product visions. But of course nothing with the brain is simple - to understand just how ambitious Silicon Valley’s promises are, it is important to recognize how much effort, and with what progress, has already gone into these same types of brain read/write devices.




Implanted brain chips are a product form marketed initially for severely ill patients, such as those with paralyzing brain injuries. But the big sell from Silicon Valley is certainly more cyberpunk than medical device. For implanted devices, it makes sense ethically to start with patients, and it’s also a practical consideration, since you can’t optimize complex technology such as self-driving cars without extensive test-track data. There are 86 billion neurons in the brain with over 100 trillion connections (synapses), compared to 4 million miles of road in the United States. Unlike a map, every brain has its own network of neurons. And unlike roads, these connections are changing every second. That means it’s going to take more than the inside-the-brain equivalent of a car-with-roof-camera to map our minds. It’s going to take loads and loads of data before we even know what we’re looking at. For implants, patients are the alpha testers providing a glimpse under the hood (skull).

The second product form is a wearable cap that peeks into your brain through your scalp and skull. While this method removes the difficulty of drilling a hole in the head, companies proposing wearable devices are equally bold in performance claims – often teasing consumer product performance that exceeds what is possible with even existing brain implants or massive hospital equipment (e.g. a 50,000-pound liquid-helium-cooled MRI). OpenWater, founded by Dr. Mary Lou Jepsen (a former executive at Facebook, Oculus, Google, and Intel), promises MRI-quality brain imaging, 1000x cheaper, that you can wear in a ski hat. Other efforts, like Kernel, propose brain-sensing caps more grounded in existing research-grade capabilities, but productized for natural environments.

Why even consider brain implants if wearable caps could work? Imagine each neuron in your brain is a person talking in a room (your head), the door is closed, and you’re outside. Implants allow us to drill a hole in the door and slip a bunch of microphones (sensors) into the room to listen to individual conversations. But now there’s a hole in the door with a cable running through it, and also people keep spilling their drinks and breaking the microphones. In contrast, wearable devices are like putting our ears to the door, and then needing to figure out (resolve) who is saying what. In this case, it’s harder to tell what is being discussed, but no harm is done to the door or equipment. The relative benefits and disadvantages of implants versus wearables are basic facts for academics and clinicians who are working on either type of form factor - inherent tradeoffs and compromises. But this is not a contrast raised by Silicon Valley, because doing so would mean acknowledging limitations in whatever approach they have chosen.

These general approaches adopted by Silicon Valley are well known and have been explored by brain researchers, clinicians, and medical device companies. Which, for a start, is what gets the academics’ goat – academics are very protective of their sandboxes. But really, the scope and scale at which Silicon Valley hopes to plug into our brains is much more ambitious than what most academics even aim for. Silicon Valley is proposing to replace the sandbox with a shopping mall.

The scale of possible benefits of brain-plugs seems equally matched by the potential for abuse. Silicon Valley giants have worked hard to position themselves as benevolent, with varying degrees of success. Brain-interface technology that can read/write thoughts will require new safeguards, independent monitoring, transparency - and ultimately regulation. But even as ethicists and futurists grapple with a whole new set of “cognitive rights” and “brain privacy” questions, it is worth considering: will Silicon Valley even succeed in making this technology work?

I’m also not going to address practical issues of user adoption here – such as whether people will put something ugly on their head or plug their brains into a computer, or how products will be priced. It’s reasonable to assume these are topics Silicon Valley does not need lecturing on. My focus is on technical feasibility, regardless of looks, individual choice, or cost. That is, I am asking whether Silicon Valley can deliver on promised product performance, regardless of if and how people then choose to adopt the product.

The ambitions of Silicon Valley should be weighed against the successes (and the limits) of existing brain-interfacing technologies. Implanted technologies have given us videos of paraplegics drinking a soda, but only while supported by a scientific team and only for the limited functional life of the implant (implanted devices are not welcomed by the brain environment). Non-invasive brain-reading technology is considered exciting when (as work from my colleague Lucas Parra shows) just monitoring brain waves allows rating of movies (tracking Nielsen ratings) or can anticipate boredom. But again, these demonstrations work in a lab, guided by a team of engineers, and so are hardly consumer-ready products. Technologies to change the brain with applied energy have produced transformative benefit, but for a limited set of patients. There is a big gap between technologies that help patients manage disease symptoms and a cap that will teach our brain chess…while we ski.

Silicon Valley points to the prior achievements of brain technology only to note their limits. It’s like pointing to trains as the reason flying cars should be made.

To the extent breakthroughs will come through better engineering, Silicon Valley has two levers to pull. The first lever is hardware: with the goal of increasing channels – essentially points of contact between the machine and the brain. Hence the Neuralink sewing machine to gently implant many electrodes. The second lever is software to analyze the messy brain data collected, including leveraging the machine learning proven so successful for finding pictures of “cat”. But these are the exact same technical levers exhaustively pursued by government agencies (including NIH and DARPA), the biomedical device industry, and academic laboratories. How then will Silicon Valley, in a short time-span and by orders of magnitude, exceed the performance of existing devices developed over decades with billions of dollars from governments?

Part 2: A question of feasibility (Past performance does not guarantee future returns)

Does the past success of Silicon Valley juggernauts prove that they can tread to horizons others can hardly dream of? That a select few (wealthy) visionaries possess a special sauce of ingenuity and irreverence to produce world-disrupting technology? Sure. When it comes to the next “it” technology, there is no reason it won’t originate in, or be acquired by, Silicon Valley’s top brands. But with brain-interfacing technology, are the rules and challenges so different as to undermine the Silicon Valley success formula? In part 2, I want to address whether academia’s limits in creating brain technology somehow support Silicon Valley taking the helm of innovation.

Past successes of Silicon Valley hardly came at the expense of government or academic efforts. Social networking, shoes-by-mail, electric cars, and app-based cab hailing were not first failed public ventures - in fact, they were built on existing public infrastructure, like the internet. So Silicon Valley’s unicorns did not rise from the ashes of failed government- or foundation-supported products. And the human brain may not be as easy to ensnare with technology as rentable co-working spaces or the vacation-booking market. Can the same culture that spawned the world’s largest companies (founded in college dorm rooms and garages) now produce rapid breakthroughs allowing productized brain-computer interfaces?

Of course, the divide between academia and industry is porous. Just as early efforts to optimize online search engines recruited programmers from universities, Silicon Valley is now hiring neuroscientists and biomedical engineers. In many cases, these companies are openly collaborating with academic and medical centers – such as Facebook working on brain-typing with the University of California, Johns Hopkins Medicine, and Washington University School of Medicine in St. Louis. My own lab at the City College of New York has worked on brain-technology projects with some of the most recognized consumer technology companies. Still, can Silicon Valley productize the previously impossible because they are now unleashed from the restrictions of academia into a “discovery playground”?

Incidentally, Elon Musk’s advertisement for engineers who don’t need to be competent in brain science is not disparaging of brain science. All medical device companies hire traditional engineers to work on appropriate sub-problems. Criticism that Neuralink’s publicity is “neuroscience theater” for individuals not familiar with past brain-computer-interface technology can be deflected, given that the explicit goal of these events was to generate hype for recruitment (and considering academics themselves are notorious for over-hyping their discoveries). At a minimum, these made-for-YouTube demonstrations show that Silicon Valley can already reproduce past academic successes (even if a decade old) with more productized technology. This “standard” level of performance is not surprising, given that Silicon Valley recruited scientists from these same academic centers. Silicon Valley demos have not shown paradigm shifts in function, or in resolving the underlying brain science problems. Notwithstanding all of this, some scientists have dissected Neuralink’s publicity video frame-by-frame, with attention to blurry background details, like fans dissecting movie trailers.

Silicon Valley leverages public mystery about brain function. “I could have a Neuralink right now and you wouldn’t know it,” Elon Musk says. Except that, in addition to the technology itself not being ready, this is also impossible from a regulatory perspective. The statement was “neuroscience theater”. Had Musk instead announced a mission to Mars and said “I could be speaking from Mars right now,” it would have been transparently bombastic.

For any technology that impacts our brain, and exhaustively so for medical devices, protocols of risk management govern – and so effectively throttle – the development of technology. The corporate culture required in a medical-device company and that of a fast-pivoting consumer-technology company don’t match. The companies that drive us to the hospital and entertain us on the way are not relied on inside the hospital. The likes of Uber, Apple, and Facebook are replaced by the likes of Boston Scientific, Medtronic, and Philips. The latter, which started as a lightbulb manufacturer, sold that consumer product division to focus on medical technology. Sure, Apple announced FDA clearance for its watch, but the feature is not diagnostic; rather, it suggests when you should visit the hospital – where it’s then not used by cardiologists.

We believe in rickety start-ups in dorm rooms and garages changing the world and making the impossible first amazing and then mundane – in no time. But the complexity of brain technology makes garage breakthroughs impossible. Silicon Valley’s brain technology “start-ups” are in-house operations or ventures backed by Silicon Valley giants. They operate out of rich research facilities, with PhD scientists (not college dropouts), and hefty contracts to medical research centers and traditional biomedical device manufacturers.

While Silicon Valley can afford to hire top expertise in medical-grade design and regulation, it is unclear how fast certain medical R&D processes can be accelerated through (or around) these regulations and standards. The medical device industry relies heavily on prior technology (“predicates”) to support incremental advances. In other words, the more innovative the technology, the less it can rely on predicates to speed the development and regulatory approval process. For wearable devices that read or write the brain for non-medical purposes (so-called “wellness” uses) – boosting performance rather than diagnosing or treating a disease – the FDA may take a backseat. No brain implant would be FDA-exempt. And even brain technology for performance-enhancement applications is being re-examined by medical regulators in the US and EU. Other regulatory agencies, such as the FCC, may also take notice. Ironically, the better these technologies are made to work – and so the more popular they become – the closer regulators will look. No one worried much about regulating self-driving cars or flying drones until they were made to work.

And there is no “fake it until you make it” when putting devices in peoples’ brains. While for wearable devices that only measure (read) the cost of failure can be limited to annoyed users, invasive technologies carry inherently substantial risk. In such cases, there may be little room for any safety-related setbacks in human trials. The cost of losing a subject cannot be compared to losing a rocket; it’s closer to an unforced fatal error by a self-driving car. All human trials are governed by strict ethical oversight through independent monitoring, and must balance risk and benefit. There is no suggestion Silicon Valley intends to play it loose with ethics. But how, then, can Silicon Valley significantly accelerate human testing?




What about the success of pharmaceutical technologies? The world’s governments deferred to big pharma and biotech to develop COVID-19 vaccines. Yet we know the vaccines were a result of foundations laid in academic research over decades (albeit sometimes under pushback from the academic establishment). The vaccine technology, as a platform, already existed. Rapid clinical trials resulted from intense public backing and government support (credit given to government ranges from significant to they-did-not-get-in-the-way). But the point is: Molecular biologists as a whole did not doubt the scientific feasibility of the vaccine development efforts - rather they supported it while acknowledging that only clinical trials could firmly establish safety and efficacy. In contrast, even enthusiastic academics acknowledge there is a large gap between what brain technology can do now and Silicon Valley’s ambitions – what skeptics would call a feasibility gap.

Silicon Valley has made lots of progress on technology to interface people with computers. But all this technology, such as the computer mouse, relies on our hand movements and our eyes on the screen. For all the categorical changes in technology, modern mouse controllers still work on the same principle as the one Steve Jobs saw on his famous visit to Xerox PARC. The feasibility of these and other controllers (keyboards, touch screens) was also baked into the original prototypes. “How well” was addressed through tremendous and innovative engineering, but feasibility was addressed from the start. Silicon Valley’s productization of these technologies from “feasible” to actually useful was no small feat – and exactly what these companies excel at. But a proof-of-principle, no matter how clunky, already showed what could be possible. Certainly, the ability of Silicon Valley to recognize this promise (feasibility), and then acquire and develop it into products, is how tech becomes valuable to society (useful).

What if feasibility is not yet baked in for brain-computer interfaces, at least not for the types of lofty applications Silicon Valley proposes?

Rudimentary consumer brain-reading technology exists, but it functions closer to a mood ring than a keyboard. Advanced brain-computer interfaces remain boutique clinical products, usually available only through clinical trials and subject to ongoing technical support from a large team of experts. In neuromodulation (brain writing), there are more success stories to point to, usually in rather ill patients: symptoms can be turned off literally with the flick of a switch. Ironically, these platforms are dismissed by Silicon Valley as too rudimentary for their vision (so they will in no way take away from the Silicon Valley luminaries’ glory once they are successful). The point is, by drastically increasing channel count (as with the Neuralink brain sewing machine), reliance on the established feasibility of prior technologies is likely wiped away. Something very new is built: but can the very process of inventing new brain technology raise further questions about feasibility?

Development of conventional computer controllers has also led to the recognition that occasional failures can make adoption unacceptable. For example, if you got a fancy new keyboard but every tenth time you typed the letter “A” it did not register, you’d go back to your trusted old keyboard. Even the successes reported for BCI are under best-case in-laboratory conditions, not “in the wild” real-world use. Indeed, these devices are almost never used outside of experimental academic studies. Even the most clinically successful neuromodulation devices are not universally effective and typically involve a trial period; for implants, this trial period means non-responders need to have the device removed. So, while Mark Zuckerberg mused about what a mind-reading wearable could do with just a few bits, biomedical engineers working on sensors know the challenge is always signal-to-noise – specifically, removing noise that can swamp signals. On the one hand, does our patience for unreliable controllers (such as voice commands to Alexa and Siri) show a willingness to accommodate a buggier interface if it is easier or provides a technology thrill? And what can be easier or cooler than thinking a command? On the other hand, when performance does matter (such as coding or driving), we stick to reliable interfaces.
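
The faulty-keyboard example hides a compounding effect worth making explicit: per-keystroke reliability multiplies across a message, so an interface that succeeds 90% of the time per event fails on most sequences of any useful length. A quick back-of-the-envelope check (the 90% figure is taken from the keyboard example above, not from any measured device):

```python
# Why a 90%-reliable interface feels broken: per-event success
# compounds multiplicatively across a sequence of events.

def sequence_success(per_event, n_events):
    """Probability that all n independent events succeed."""
    return per_event ** n_events

for n in (1, 5, 10, 40):
    p = sequence_success(0.90, n)
    print(f"{n} keystrokes at 90% each -> {p:.0%} chance of no error")
```

A ten-keystroke word comes out clean only about a third of the time (0.9^10 ≈ 0.35) – which is why users abandon flaky interfaces even when each individual failure seems minor.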

For all the reasons noted above, past success in consumer tech does not guarantee future performance in brain interfaces. At Facebook’s 2017 developer meeting promoting its vision for brain typing, the organizers asked: “Do you want to work for the company who pioneered putting augmented reality dog ears on teens, or the one that pioneered typing with telepathy?” Indeed, being incredibly successful at the former does not suggest success at the latter. By summer 2021, Facebook’s brain-typing program was apparently canceled.

Part 3: A secret brain sauce for magical devices?

I am interested in whether, from a technical perspective, Silicon Valley can deliver on big promises to wire our brains to computers. Part 1 explained a conceptual framework for brain-interfacing and showed that Silicon Valley’s approaches have been tackled by academics for decades - at least in principle, using the same technical levers. Part 2 asked whether Silicon Valley’s track record in innovating other types of consumer products itself supports success in building brain-plugs. In this part 3, I get at the crux of critiques (including from many brain scientists) about the feasibility of Silicon Valley’s biggest promises for brain reading and writing devices.

Past discoveries of technology for the brain were made while interacting with it using existing hardware. For example, Deep Brain Stimulation (DBS), the posterchild for brain technology that can heal at the flick of a switch, was discovered serendipitously by neurosurgeons. The discovery of Spinal Cord Stimulation (SCS) came from testing the “gate control” theory of pain using off-the-shelf hardware. Breakthroughs in understanding the brain adapted established technology in ingenious ways; this includes Nobel prize-winning advances in brain science, spanning foundational work circa 1950 by Alan Hodgkin and Andrew Huxley to work by John M. O’Keefe (circa 1970) and May-Britt and Edvard Moser (circa 2005) on how the brain has a sense of place and navigation. Even breakthroughs that were clearly technical ones required equal measures of technical ingenuity and insight into brain physiology. This includes major advances in brain sensing, such as functional MRI, which tracks brain activity based on an understanding of brain blood flow. New approaches to non-invasively read and write the brain arose from clever hardware modifications, be it magnetic induction, ultrasound, or high-definition (HD) transcranial electrical stimulation in my own lab. Technology development followed novel intuition on how to connect to the brain.

Even after discovery, DBS and SCS products were initially developed by adapting existing hardware platforms such as pacemakers. Hardware and software sophistication was introduced incrementally, driven by clever ideas (like using feedback from the brain), not simply by increasing device complexity. Thus, while the march toward denser hardware and smarter software is inevitable, these have buttressed rather than heralded the most meaningful breakthroughs in brain-interfacing technologies. The history of successful brain technology suggests ‘doing more with less’ is a winning strategy. Adding technological complexity for its own sake, without a roadmap based on ideas about brain function, may result in neat new scientific tools (toys for scientists) without direct or immediate changes in outcomes.

Breakthroughs in brain-interfacing technologies are characterized not by categorical leaps in channel count or artificial intelligence, but by incremental improvements in performance driven by insights into how the brain works (and so by sticking to technology that is immediately useful). This goes alongside a recognition that there is a steep and harsh cost to complexity when interfacing with the body. For example, increasing channel density means smaller electrodes, and smaller electrodes are much less stable over time (the body actively breaks them down).

This tried-and-tested approach to developing successful brain technologies also recognizes the value of not starting from scratch and, to the extent possible, relying on or incrementing existing technology. This isn’t just an ethical calculation (minimizing risk to subjects) but a practical one – when faced with many unknowns (e.g., how an implanted electrode material will fare inside the body after 20 years), old technology provides a platform to build on. Some things can’t be rushed simply with more money and “better” thinking – for example, testing the long-term safety of new implant materials can only be accelerated so much.

Which is not to say there are no academic projects and nascent industry efforts that propose paradigm shifts in brain-interface technologies. Ideas range from the “Stentrode,” which can be inserted inside the brain’s blood vessels, to the “Injectrode,” with electrodes that can take a liquid state, to “Magneto-Electric Nano-Particles,” to watches that stimulate deep in the brain, to numerous approaches to increase channel count or process brain signals. A company I started is among many aiming to transform wearable brain-interfacing. These approaches represent an arc of technology creation from academia to industry, reflected in track records of publications and peer-reviewed government support, advancing through various stages of human trials and FDA approval. The goals of these efforts are more focused and humble than what Silicon Valley promises, and (reflecting their roots in academic work) more deferential to scientific and technical foundations laid over many years. Silicon Valley’s projects and aspirations must therefore be considered in light of these ongoing efforts, and of what unique, game-changing cultural or financial approaches Silicon Valley might bring.

Just the idea of increasing channel count, and with it channel density, is commonplace in brain interfaces. For those who have worked for decades on increasing hardware density, the fundamental challenges derive not simply from technical miniaturization but from the adverse biological environment: the constantly moving and chemically reactive brain makes smaller and smaller interfaces ever less reliable. Nor is the limit on software resolvable with a “bigger” computer. The broader reason academics are almost universally incredulous of Silicon Valley’s promises to “tap the brain” is precisely that breakthroughs are thought to depend first on insights from biological and cognitive neuroscience, which in turn guide technology development. The ability to read someone’s brain when they think “cat” depends not simply on training software, but also on understanding how the brain itself represents the thought of “cat”.

A brain implant with a million contacts is not in itself a solution. Indeed, NYU neuroscientist György Buzsáki warns “We have to be careful that new methods [in brain research] will not simply reveal more and more about less and less.” Meaning it is not evident that brute force over-engineering can solve fundamental problems in brain science.

In this sense, maybe Silicon Valley cannot simply over-engineer the problem. Rather, technical advances must accompany, or indeed even follow, brain-science insights. If so, the question is: can these companies “out-neuroscience” everyone? Can Silicon Valley, with substantial resources that nonetheless pale in comparison to government-funded efforts, solve essential questions in biology and cognitive neuroscience that would then unlock these product applications? The Neuralink website has a “science” section that contains just cartoons of middle-school-level neuroscience. If Silicon Valley has uncovered secrets about the brain’s function, they are not sharing them. In their defense – since they don’t depend on peer review or government-awarded contracts – why should they?

When critical comments from Prof. Andrew Jackson of Newcastle University circulated on Twitter, Musk tweeted back.

A social media zinger, but one that gets Jackson’s point backward: what Neuralink brought ‘to fruition’ achieved nothing new. Scientists did not think going to the moon would be trivial, but they did think it was feasible. And the lunar mission was directed by NASA, which is closer to an applied academic lab than to a for-profit company (albeit with industry supplying NASA with products). Moreover, the cost of the moon landing was about $280 billion (in today’s dollars) – roughly 800 times Neuralink’s current funding of $363 million. The National Institutes of Health pours ~$40 billion per year into brain research with humbler goals than Neuralink’s. The idea of going into the brain is trivial; doing it is really hard – harder and more expensive even than a round trip to the moon.




The story of Jeff Hawkins seems telling. Hawkins became interested in the brain and cognition early in his academic life, but found little support in academia to pursue big questions about cognition. In the 1990s Hawkins took a detour, co-founding and running Palm Inc. and Handspring, before returning to his passion for brain science by founding the Redwood Center for Theoretical Neuroscience in 2002 and Numenta in 2005. Numenta is a brain-inspired machine learning company (software), which isn’t burdened by developing brain-plug hardware. One gets the sense that Hawkins’s successful decade-long detour in consumer high-tech was “easy” compared to the ongoing decades-long challenge of developing technology based on the brain. So maybe, just maybe, technology that interfaces with the brain requires an order of effort and innovation beyond that needed to make handheld computers, electric cars, and social media platforms.

Paul Allen, who co-founded Microsoft in 1975, invested ~$500 million starting in 2012 in the Allen Institute for Brain Science and the Allen Institute for Artificial Intelligence. For a decade, hundreds of scientists at the Allen Institute have worked to explain the brain - and even this effort pales beside the scale of other ongoing government and foundation efforts. A single grant from the European Union provided ~$1.5 billion to a project led by Henry Markram to create a computer model of the brain – a project that disappointed after 10 years. Making technology that requires a profound interface with the brain (like reading and writing thoughts) is much harder than coding an operating system, an online payment system like PayPal, or video streaming. Brain interfaces not only take much more time and money; they may also simply not work.

Let’s assume understanding of the brain, from the small scale of molecules to the large scale of thoughts, is so undeveloped that it severely throttles breakthroughs in brain technology. How, then, will Silicon Valley’s new hardware work? In one scenario, brute-force over-engineering (increasingly dense hardware and smarter software) overcomes the lack of insight into brain function. Breakthrough device performance is achieved without unlocking the mysteries of the brain; instead, the brain is treated as a black box we wrap in wires. In another scenario, the very process of creating this technology cracks open the mysteries of the mind - here, the deepest puzzles of brain function are just one more hardware channel and one more line of code away. A third option involves a killer app that avoids the need for precise reading or writing of the brain - something that acts more like power steering than a self-driving car, with performance derived from symbiosis between brain and computer.

All these scenarios are at least plausible. What does Silicon Valley need to know about the brain, and what does it not, to make products? The first two scenarios amount to developing engineering solutions without an understanding of the underlying problem. This appears to be the exact opposite of the winning approach of Silicon Valley, at least as recounted by its luminaries: that the solution came first (arriving in a flash of entrepreneurial insight) and then drove the necessary technology - technology from “first principles”. In the third scenario, we have yet to hear anything new from Silicon Valley: the same problems (mental illness, typing with the brain) tackled with the same general approaches (a brain-computer plug).

Maybe tackling the same problems by critically improving others’ approaches can be a recipe for success. Microsoft borrowed Apple’s windowed interface. Apple made Xerox’s computer mouse usable. Tesla made better electric cars. Such leaps are made possible by the ingenuity of engineers who challenge technology conventions, with results sometimes described as wizardry. With all the resources Silicon Valley can pour into brain technology, its efforts will inevitably advance brain-interface technology - especially when internal prototypes and vaporware are made public through product releases or rigorous scientific disclosure, not just made-for-streaming demos and publicity publications. But will these efforts lead to consumer products with magical performance?

To be clear, there is no good reason to suspect that Silicon Valley’s brain-interfacing efforts are a house of cards, or a big scientific lie. Because they build on decades of science and technology, show compelling hints of performance, and display a healthy willingness to cancel under-performing projects - not to mention the corporate and individual reputations at stake - even skeptics admit there is “some there, there”. These companies have hired the best people from academia and industry, with stellar reputations (albeit with some revolving-door employment). What is harder to gauge, certainly without insight into internal projects and plans, is whether, and when, to expect magical results.




Is there some argument for intentionally slowing development, effectively blocking innovation? I would put aside the ethical dimension of misuse, and simply ask: can Silicon Valley be faulted for trying to accelerate the creation of functional brain-plugs? Certainly, patients don't have the luxury of decades-long R&D programs. And at some point, an R&D program becomes so slow and incremental that the horizon for a real breakthrough moves outside our lifetimes - disrupters are then needed. For me, the potential for Silicon Valley to accelerate the development of brain-interfacing technologies in general, in a way that ripples into academic and conventional medical-device efforts, is the basic reason to cheer their success. Here Silicon Valley’s irreverence (not admitting that past struggles throttle rapid technology creation) and impatience (the need to promise results within a few short years) are virtues. It would then be a bad sign if the release dates for these products start to slip away, toward an ever-receding horizon.

For all the concern I lay out here about the feasibility of brain-plugs with transhuman functionality, I am convinced advances in brain technology can be transformative - because existing technology already is. For two decades I’ve steered my own lab’s research directly toward making better brain interfaces, with a focus on reducing the burden of brain disease: giving a person with chronic migraines a day free of pain; helping someone with multiple sclerosis stay sharp so they can keep working; showing a blind toddler the way. And maybe helping everyone be a little more creative, stay focused, or mentally age better. In Scientific American I’ve mused on a future with “electric pharmacies,” and my TEDx talk was exactly about brain caps with dials to change how we think and feel. So when Silicon Valley talks about brain interfaces, they “have me at hello.” And in disclosure, both brand-name and startup companies have engaged my lab on brain-technology projects. Expanding the use of brain-plugs in medicine and in everyday life (work, play) seems inevitable. But in what ways will Silicon Valley smash open new opportunities that make brain-plugs as ubiquitous (and as useful) as coffee, food delivery, and computer games?

Criticism fades to dust and skeptics become customers when Silicon Valley delivers “magic”. Silicon Valley does not need to publish in academic journals, because success (product adoption) depends only on the phone unlocking with your face, not on an academic paper about the limits of facial recognition. So we will wait and see what products are delivered. Brain-plugs for streaming thoughts, shopping malls for the mind, and cures for brain diseases will, or will not, arrive at our door - the rest is academic.




