Delivering an amazing breakthrough in your intelligence
In the coming decade, we may begin connecting our brains directly to AI.
Elon Musk’s company Neuralink just announced groundbreaking progress on its “Brain-Computer Interface” (BCI) technology, striving towards a 2 gigabit-per-second wireless connection between a patient’s brain and the cloud in the next few years.
Initial human trials are expected by the end of 2020. Long-term, Elon expects BCI installation to be as painless and simple as LASIK surgery (a thirty-minute visit, no stitches or general anesthesia required).
Over a decade ago, Ray Kurzweil predicted that our brains would seamlessly connect to the cloud by 2035. Even considering his 86% prediction accuracy rate, this one seemed somewhat ambitious. But Neuralink’s recent announcement adds significant credence to Ray’s prediction and timeline.
In the long-term, the implications of high-bandwidth BCI are extraordinary. Nothing is more important to a company, nation, or individual than intelligence. It is the fundamental key to problem-solving and wealth creation, and underpins the human capital that drives every company and nation forward.
BCIs will ultimately make the resource of human intelligence massively abundant.
In this blog, I’ll be exploring:
- Neuralink’s groundbreaking advancements;
- Roadmaps for BCI;
- Implications of human capital abundance & the future of intelligence.
Let’s plug in…
Neuralink Update
Beyond the pioneering technology itself, Neuralink has a compelling business plan.
The company’s brain implants, connected via Bluetooth to an external controller, are designed first to treat patients with cervical fractures and neurological disorders, helping them regain near-normal function. Long-term, the implants will be made available to the general population for enhanced capability, or to enable AI enhancement of our brains.
In the company’s first public announcement, Elon outlined three main goals of Neuralink’s device:
- Increase by orders of magnitude the number of neurons you can read from and write to in safe, long-lasting ways;
- At each stage, produce devices that serve critical unmet medical needs of patients;
- Make it as simple and automated as LASIK.
The three-pound organ within our skulls that we call the brain is composed of 100 billion neurons and 100 trillion synapses, encompassing everything we see, feel, hear, taste, and remember. Everything that makes me, me, and everything that makes you, you.
In the near-term, Neuralink aims to restore function to those patients who have suffered brain and spinal injuries, helping reinstate their ability to feel and regain motor agency. Beyond such use cases, however, Neuralink ultimately strives to achieve a full “symbiosis with AI,” according to Elon. He makes the important distinction, however, that merging with AI will be an option — not a requirement — in the future.
BCI devices will serve as the brain’s tertiary “digital superintelligence layer,” a layer we arguably already experience in the form of phones, laptops, wearables, and the like.
Yet as explained by Elon, “the constraint is how well you interface — the input and the output speeds. You have a very slow output speed, with typing on keys. Your input speed is faster due to vision.”
Neuralink will eradicate these barriers to speed, providing instantaneous, seamless access to an abundance of knowledge, processing power, and even sensory experience.
Understanding the Hardware
One breakthrough enabling Neuralink’s technology is the development of flexible electrode “threads” roughly one-tenth the width of a human hair (4 – 6 μm wide, or the approximate width of a neuron). These can be inserted into the uppermost layers of the human cortex to interface (read and write) with neurons.
1,024 of these threads attach to a single small Neuralink chip (“N1”) embedded in the skull, just below the scalp. Each N1 chip collects and transmits 200 Mbps of neural data, and up to 10 such chips implanted in a patient allow for a total wireless connection of 2 Gbps. That connection runs via Bluetooth to an ear-mounted device that relays the brain data to the cloud.
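As a quick sanity check, here is the arithmetic behind that 2 Gbps figure as a minimal Python sketch (the per-chip rate and chip count are Neuralink’s stated targets, not measured throughput):

```python
# Back-of-the-envelope check of the stated bandwidth target, using
# only the figures quoted above (targets, not measured throughput).
THREADS_PER_CHIP = 1024   # electrode threads per N1 chip
MBPS_PER_CHIP = 200       # stated per-chip data rate
MAX_CHIPS = 10            # maximum implanted chips per patient

total_mbps = MBPS_PER_CHIP * MAX_CHIPS
print(f"{total_mbps} Mbps = {total_mbps / 1000} Gbps aggregate")  # 2.0 Gbps
print(f"{THREADS_PER_CHIP * MAX_CHIPS} threads in total")         # 10240
```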
Enter an era wherein users can control their brain implants via an iPhone app. Or imagine the 2030 generation of iPhones (if iPhones are still around), revamped to include a separate App Store: Brain Edition.
Given the threads’ minuscule size, large number, and flexibility, Neuralink had to develop a special-purpose, high-precision robot to perform the thread-insertion procedure.
During the procedure, a mere 2mm incision in the scalp and skull is needed for each implant, small enough to be closed with crazy glue. To minimize the risk of brain trauma, the robot’s 24-micron needle is designed to precisely place threads while avoiding blood vessels. In initial quadriplegic patients, one array will reside in the somatosensory region of the brain and three in the motor cortex.
As summed up by lead Neuralink surgeon Dr. Matthew MacDougall, “We developed a robotic inserter that can rapidly and precisely insert hundreds of individual threads, representing thousands of distinct electrodes, into the cortex in under an hour.”
Progress in Neuralink’s labs has been fast and furious. Over the past two years, the size-to-performance ratio of Neuralink’s electrodes has improved seven-fold.
Recalling Ray Kurzweil’s prediction of high-speed BCI by 2035 (only 15 years from now), how far can the technology go in this short timeframe?
Well, consider that if chip performance doubles every two years, we will see seven doublings over the next 14 years, a 128X improvement in the technology by Ray’s 2035 target.
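Here is that compounding assumption as a minimal sketch; the two-year doubling period is a Moore’s-law-style assumption, not a published Neuralink roadmap:

```python
# Minimal sketch of steady performance doubling; the two-year doubling
# period is an assumption in the spirit of Moore's law, not a roadmap.
def projected_gain(years: float, doubling_period_years: float = 2.0) -> float:
    """Multiplicative performance gain after `years` of steady doubling."""
    return 2.0 ** (years / doubling_period_years)

print(projected_gain(14))  # 7 doublings -> 128.0x
print(projected_gain(15))  # ~181x if growth compounds continuously
```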
For perspective, remember that the first-generation iPhone was only released in 2007 — just a dozen years ago — and look how far that technology has traveled!
Bolstered by converging exponential technologies, BCIs will undoubtedly experience massive transformation in the decade ahead.
But Neuralink is not alone….
While there are likely dozens of other top-secret BCI government ventures taking place in the U.S., China, and Russia, to name a few countries, here are some of the key players driving the industry in the U.S.:
(1) Kernel is currently working on a “noninvasive mind/body/machine interface (MBMI)” able to receive signals from far more neurons than the 100 that current neuromodulators can stimulate.
Kernel’s CEO and my friend Bryan Johnson aims to initially use the neuroprosthetic to treat disorders such as Alzheimer’s, strokes, and concussions. Yet long-term, Johnson envisions the technology will also help humans keep up with the rapid advancement of computation.
(2) Facebook announced in 2017 its work on a noninvasive BCI that would integrate with the company’s augmented reality headset, providing a “brain click” function at the most basic level. According to Zuckerberg, the BCI can already distinguish whether a user is thinking about an elephant or a giraffe, and it will ultimately be used for type-to-text communication.
“Our brains produce enough data to stream 4 HD movies every second. The problem is that the best way we have to get information out into the world—speech—can only transmit about the same amount of data as a 1980s modem. We’re working on a system that will let you type straight from your brain about 5X faster than you can type on your phone today,” as explained by Zuckerberg in a post.
(3) CTRL-Labs, a startup founded by Thomas Reardon, creator of Microsoft’s Internet Explorer, and his partners, is now developing a BCI mediated through a wristband that detects voltage pulses from muscle contractions.
The group aims to eventually detect individual clusters of muscle cells so that users can link imperceptible movements to a variety of commands.
(4) One of the earliest BCI backers, DARPA has funded BCI research since the 1970s, aiming to use the technology for both recovery and enhancement. Recent advancements, however, remain under wraps.
(5) While most of the invasive BCI technologies mentioned here await human trials, BrainGate has already demonstrated success in humans. In one iteration of its technology, researchers implanted one to two electrode arrays in the brains of three paralyzed patients. The implants allowed all three to move a cursor on a screen simply by thinking about moving their hands. One participant even recorded eight words per minute.
This astounding feat, achieved with just a couple of arrays, suggests tremendous promise for the thousands of electrodes that Elon plans to deploy in Neuralink’s devices. While FDA approval for human trials will likely take time (Neuralink has primarily tested its technology in mice and a few monkeys), use in human therapeutics is now finally on the horizon.
How much time?
Financial analysts forecast a $27 billion market for neural devices within the next six years. Elon anticipates reaching human trials by the end of next year. And by 2035, the technology is set to achieve low-cost, widespread adoption.
Neuralink’s high-bandwidth brain connection will exponentially transform information accessibility. Thought-to-speech technology will allow us to control avatars — both digital and robotic — directly with our minds.
We will not only upload photos and conversations to the cloud, but entire memories, ideas, and abstract thought. Say goodbye to Google search and 2D screen-confined engines as we adapt to querying directly from our brains.
And for those of you worried about Terminator-like scenarios of AI’s destruction of the human race, BCI will offer us the potential to join tomorrow’s intelligence revolution, rather than be crushed by it.
Closing Thoughts…
Every human today is composed of ~40 trillion cells that all function together in a collaborative fashion, constituting you, me, and every person alive.
One of the most profound and long-term implications of BCI is its ability to interconnect all of our minds. To share our thoughts, memories, and actions across all of humanity.
Imagine just for a moment: a future society in which each of us is connected to the cloud through high-bandwidth BCI, allowing the unfiltered sharing of feelings, memories and thoughts.
Imagine a kinder and gentler version of the Borg (from Star Trek), allowing the linking of 8 billion minds via the cloud and reaching a state of transformative human intelligence.
For those concerned about the domination of AI (i.e. the Terminator scenario), take some comfort in the notion that it isn’t AI versus humans alone. A new version of Human Augmented Intelligence (HI) is just around the corner.
Our evolution from screens to augmented reality glasses to brain-computer interfaces is already beginning. Prepare for the accelerating pace of groundbreaking HI.

Please keep me in mind as your Executive Coach and for Senior Executive Engagement and Board of Director openings. If you hear of anything within your network that might be a positive fit, I’d appreciate a heads-up. Email me: [email protected] or schedule a call: Cliff Locks
Contributor: Peter Diamandis
The Future of Entertainment. I think you’ll be surprised!
Twenty years ago, entertainment was dominated by a handful of producers and monolithic broadcasters, a near-impossible market to break into. Today, the industry is almost entirely dematerialized, while storytellers and storytelling mediums explode in number. And this is just the beginning.
Netflix turned entertainment on its head practically overnight, shooting from a market cap of US$8 billion in 2010 (the same year Blockbuster filed for bankruptcy) to a record US$185.6 billion only 8 years later. This year, it is expected to spend a whopping US$15 billion on content alone.
Meanwhile, VR platforms like Google’s Daydream and Oculus have only begun bringing the action to you, while mixed reality players like Dreamscape will forever change the way we experience stories, exotic environments and even classrooms of the future.
In the words of Barry Diller, a former Fox and Paramount executive and the chairman of IAC, “Hollywood is now irrelevant.”
In this two-part series, I’ll be diving into three future trends in the entertainment industry: AI-based content curation, participatory story-building, and immersive VR/AR/MR worlds.
Today, I’ll be exploring the creative future of AI’s role in generating on-demand, customized content and collaborating with creatives, from music to film, in refining their craft.
Let’s dive in!
AI Entertainment Assistants
For many of us, film brought our conceptions of AI to life, from HAL in 2001: A Space Odyssey to Marvel’s JARVIS.
And now, over 50 years later, AI is bringing stories to life like we’ve never seen before.
Converging with the rise of virtual reality and colossal virtual worlds, AI has begun to create vastly detailed renderings of deceased stars, generate complex supporting characters with intricate story arcs, and even bring your favorite performers, whether Marlon Brando or Amy Winehouse, back to the big screen and into virtual environments.
While still in its nascent stages, AI has already been used to embody virtual avatars that you can converse with in VR, soon to be customized to your individual preferences.
But AI will have far more than one role in the future of entertainment as industries converge atop this fast-moving arena.
You’ve likely already seen the results of complex algorithms that predict the precise percentage likelihood you’ll enjoy a given movie or TV series on Netflix, or recommendation algorithms that queue up your next video on YouTube. Or think Spotify playlists that build out an algorithmically refined, personalized roster of your soon-to-be favorite songs.
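Under the hood, many of these recommenders reduce to some form of similarity scoring over user behavior. Here is a toy item-based sketch (the titles and ratings are invented for illustration; production systems are vastly more sophisticated):

```python
import math

# Toy item-based recommender: score unseen titles by the similarity of
# their user-rating vectors to a title the user liked. Illustrative
# only; the data is made up and real recommenders are far richer.
ratings = {  # user -> {title: rating}
    "ana":  {"Stranger Things": 5, "Black Mirror": 4, "The Crown": 1},
    "ben":  {"Stranger Things": 4, "Black Mirror": 5, "The Crown": 2},
    "cara": {"Stranger Things": 1, "Black Mirror": 2, "The Crown": 5},
}

def similarity(title_a, title_b):
    """Cosine similarity between two titles' user-rating vectors."""
    a = [ratings[u].get(title_a, 0) for u in ratings]
    b = [ratings[u].get(title_b, 0) for u in ratings]
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Recommend the title most similar to one the user just finished:
scores = {t: similarity("Stranger Things", t)
          for t in ["Black Mirror", "The Crown"]}
print(max(scores, key=scores.get))  # -> "Black Mirror"
```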
And AI entertainment assistants have barely gotten started.
AIs like Google’s Assistant and Huawei’s Xiaoyi (a voice assistant that lives inside Huawei’s smartphones and AI Cube smart speaker) are just the beginning. Coming advancements will enable your assistant to search for and select songs based on your current and desired mood, pick out movies that bridge your and your friends’ viewing preferences on a group film night, or even queue up games whose characters are personalized to interact with you as you jump from level to level.
Or even imagine your own home leveraging facial recognition to assess your disposition, cross-reference historical data on your entertainment choices at a given time or frame of mind, and automatically queue up a context-suiting song or situation-specific video for comic relief.
Curated Content Generators
Beyond personalized predictions, however, AIs are now taking on content generation, multiplying your music repertoire, developing entirely new plotlines, and even bringing your favorite actors back to the screen or — better yet — directly into your living room.
Take AI motion transfer, for instance.
Employing generative adversarial networks (GANs), a subset of machine learning, a team of researchers at UC Berkeley has developed an AI motion transfer technique that superimposes the dance moves of professionals onto any amateur (‘target’) individual in seamless video.
By first mapping the target’s movements onto a stick figure, Caroline Chan and her team create a database of frames, each frame associated with a stick-figure pose. They then use this database to train a GAN and thereby generate an image of the target person based on a given stick-figure pose.
Map a series of poses from the source video to the target, frame-by-frame, and soon anyone might moonwalk like Michael Jackson, glide like Ginger Rogers or join legendary dancers on a virtual stage.
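As a rough illustration of that pipeline, here is a structural sketch in Python. Both functions are stubs standing in for real models, a pose estimator (such as OpenPose) and a trained pix2pix-style generator; this shows the data flow only, not the Berkeley team’s actual code:

```python
# Structural sketch of the motion-transfer pipeline described above.
# Both functions are stubs standing in for real models: a pose
# estimator (e.g., OpenPose) and a pix2pix-style GAN generator trained
# on (pose, frame) pairs of the target person.

def estimate_pose(frame):
    """Stub: reduce a video frame to a stick-figure pose (joint coordinates)."""
    return [(0.5, 0.5)] * 18  # 18 dummy joint positions

def render_target(pose):
    """Stub: the trained generator renders the target person in this pose."""
    return f"frame of target striking pose {pose[0]}"

source_video = [f"source_frame_{i}" for i in range(3)]  # e.g., a pro dancer

# Frame by frame: extract the source's pose, then re-render it as the target.
transferred = [render_target(estimate_pose(f)) for f in source_video]
print(len(transferred), "frames of the target performing the source's moves")
```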
Somewhat reminiscent of AI-generated “deepfakes,” the use of generative adversarial networks in film could massively disrupt entertainment, bringing legendary performers back to the screen and granting anyone virtual stardom.
Just as digital artists increasingly enhance computer-generated imagery (CGI) techniques with high-fidelity 3D scanning for unprecedentedly accurate rendition of everything from pores to lifelike hair textures, AI is about to give CGI a major upgrade.
Fed countless hours of footage, AI systems can be trained to refine facial movements and expressions, replicating them on any CGI model of a character, whether a newly generated face or iterations of your favorite actors.
Want Marilyn Monroe to star in a newly created Fast and Furious film? No problem! Keen to cast your brother in one of the original Star Wars movies? It might soon be as easy as contracting an AI to edit him in, ready for his next Jedi-themed birthday.
Companies like Digital Domain, co-founded by James Cameron, are hard at work to pave the way for such a future. Already, Digital Domain’s visual effects artists employ proprietary AI systems to integrate humans into CGI character design with unparalleled efficiency.
As explained by Digital Domain’s Digital Human Group director Darren Handler, “We can actually take actors’ performances — and especially facial performances — and transfer them [exactly] to digital characters.”
And this weekend, AI-CGI cooperation took center stage in Avengers: Endgame, seamlessly recreating facial expressions on its villain Thanos.
Even in the realm of video games, upscaling algorithms have been used to revive childhood classic video games, upgrading low-resolution features with striking new graphics.
One company that has begun commercializing AI upscaling techniques is Topaz Labs. While some manual craftsmanship is required, the use of GANs has dramatically sped up the process, promising extraordinary implications for gaming visuals.
But how do these GANs work? After training a GAN on millions of pairs of low-res and high-res images, one part of the algorithm attempts to build a high-resolution frame from its low-resolution counterpart, while the second algorithm component evaluates this output. And as the feedback loop of generation and evaluation drives the GAN’s improvement, the upscaling process only gets more efficient over time.
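For the curious, here is what that generate-and-evaluate feedback loop looks like as a minimal PyTorch sketch. The tiny networks and random tensors are illustrative stand-ins; real upscalers use far deeper architectures, perceptual losses, and actual photo datasets:

```python
import torch
import torch.nn as nn

# Minimal sketch of the generate-and-evaluate loop described above.
generator = nn.Sequential(               # low-res image -> high-res guess
    nn.Upsample(scale_factor=2), nn.Conv2d(3, 3, 3, padding=1))
discriminator = nn.Sequential(           # high-res image -> "is it real?"
    nn.Conv2d(3, 8, 3, padding=1), nn.Flatten(),
    nn.Linear(8 * 64 * 64, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

low_res = torch.rand(4, 3, 32, 32)       # stand-ins for real photo pairs
high_res = torch.rand(4, 3, 64, 64)

for step in range(20):
    # 1) The evaluator learns to tell real high-res frames from generated ones.
    fake = generator(low_res).detach()
    d_loss = (bce(discriminator(high_res), torch.ones(4, 1)) +
              bce(discriminator(fake), torch.zeros(4, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) The generator learns to fool the evaluator -- the feedback loop
    #    that makes the upscaling sharper over time.
    g_loss = bce(discriminator(generator(low_res)), torch.ones(4, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```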
“After it’s seen these millions of photos many, many times it starts to learn what a high resolution image looks like when it sees a low resolution image,” explained Topaz Labs CTO Albert Yang.
Imagine a future in which we might transform any low-resolution film or image with remarkable detail at the click of a button.
But it isn’t just film and gaming that are getting an AI upgrade. AI songwriters are now making a major dent in the music industry, from personalized repertoires to melody creation.
AI Songwriters and Creative Collaborators
While not seeking to replace your favorite artists, AI startups are leaping onto the music scene, raising millions in VC investment to assist musicians with the creation of novel melodies and underlying beats… and perhaps one day with lyrics themselves.
Take Flow Machines, a songwriting algorithm already in commercial use. Now employed by numerous musical artists as a creative assistant, Flow Machines has even made appearances on Spotify playlists and top music charts.
And startups are fast following suit, including Amper, Popgun, Jukedeck and Amadeus Code.
But how do these algorithms work? By processing thousands of genre-specific songs or an artist’s genre-mixed playlist, songwriting algorithms are now capable of optimizing and outputting custom melodies and chord progressions that interpret a given style. These in turn help human artists refine tunes, derive new beats, and ramp up creative ability at scales previously unimaginable.
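As a toy illustration of learning a style and sampling new material in it, here is a tiny Markov-chain melody generator (real songwriting AIs use deep sequence models, and the “training” melody here is invented):

```python
import random

# Toy sketch of the idea above: learn note-to-note transitions from a
# "training" melody, then sample a new tune in the same style.
training_melody = ["C", "E", "G", "E", "C", "E", "G", "A", "G", "E", "C"]

transitions = {}
for current, nxt in zip(training_melody, training_melody[1:]):
    transitions.setdefault(current, []).append(nxt)

random.seed(7)
note = "C"
generated = [note]
for _ in range(8):
    note = random.choice(transitions[note])  # sample in the learned style
    generated.append(note)

print(" ".join(generated))  # a new melody echoing the training tune
```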
As explained by Amadeus Code’s founder Taishi Fukuyama, “History teaches us that emerging technology in music leads to an explosion of art. For AI songwriting, I believe [it’s just] a matter of time before the right creators congregate around it to make the next cultural explosion.”
Envisioning a future wherein machines form part of the creation process, Will.i.am has even described a scenario in which he might tell his AI songwriting assistant, “Give me a shuffle pattern, and pull up a bass line, and give me a Bootsy Collins feel…”
AI: The Next Revolution in Creativity
Over the next decade, entertainment will undergo its greatest revolution yet. As AI converges with VR and crashes into democratized digital platforms, we will soon witness the rise of everything from edu-tainment, to interactive game-based storytelling, to immersive worlds, to AI characters and plot lines created on-demand, anywhere, for anyone, at almost zero cost.
We’ve already seen the dramatic dematerialization of entertainment. Streaming has taken the world by storm, as democratized platforms and new broadcasting tools birth new convergence between entertainment and countless other industries.
Posing the next major disruption, AI is skyrocketing to new heights of creative and artistic capacity, multiplying content output and allowing any artist to refine their craft, regardless of funding, agencies or record deals.
And as AI advancements pick up content generation and facilitate creative processes on the back end, virtual worlds and AR/VR hardware will transform our experience of content on the front end.

In our next blog of the series, we’ll dive into mixed reality experiences, VR for collaborative storytelling, and AR interfaces that bring location-based entertainment to your immediate environment.
Contributor: Peter Diamandis