How Augmented Reality (AR) will change your industry

More than 2,000 AR apps already run across over 1.4 billion active iOS devices. Even if only at a rudimentary level, the technology is now permeating the consumer products space.
And in just the next four years, the International Data Corporation (IDC) forecasts AR headset production will surge 141 percent each year, reaching a whopping 32 million units by 2023.
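For perspective, here's a quick back-of-the-envelope check of that forecast in Python. The roughly 1-million-unit 2019 baseline is my own illustrative assumption, not an IDC figure:

```python
# Compound a 141% annual growth rate from an assumed 2019 baseline.
baseline_units = 1.0e6   # assumed 2019 production (illustrative, not IDC data)
growth_rate = 1.41       # 141 percent year-over-year growth

units = baseline_units
for year in range(2020, 2024):
    units *= 1 + growth_rate
    print(f"{year}: ~{units / 1e6:.1f}M units")
# 2023 lands at ~34M units, consistent with the ~32M forecast above.
```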
AR will soon serve as a surgeon’s assistant, a sales agent, and an educator, personalized to your kids’ learning patterns and interests.
In this fourth installment of our five-part AR series, I’m doing a deep dive into AR’s most exciting industry applications, poised to hit the market in the next 5-10 years.
Let’s dive in.
Healthcare
(1) Surgeons and physicians:
Whether through detailed and dynamic anatomical annotations or visualized patient-specific guidance, AR will soon augment every human medical practitioner.
To start, AR is already being used as a diagnostic tool. SyncThink, recently hired by Magic Leap, has developed eye-tracking technology to diagnose concussions and balance disorders. Another startup, XRHealth, launched its ARHealth platform on Magic Leap to aid in rehabilitation, pain distraction, and psychological assessment.

Moreover, surgeons at Imperial College London have used Microsoft’s HoloLens 1 in pre-operative reconstructive and plastic surgery procedures, which typically involve using CT scans to map the blood vessels that supply vital nutrients during surgery.
As explained by the project’s senior researcher, Dr. Philip Pratt, “With the HoloLens, we’re now doing the same kind of [scan] and then processing the data captured to make it suitable to look at. That means we end up with a silhouette of a limb, the location of the injury, and the course of the vessels through the area, as opposed to this grayscale image of a scan and a bit more guesswork.”
AR can even help surgeons visualize the depth of vessels and choose the optimal incision location, dramatically lowering associated risks.
And while the HoloLens 1 was only used in pre-op visualizations, Microsoft’s HoloLens 2 is on track to reach the operating table. Take Philips’ Azurion image-guided therapy platform, for instance. Built specifically for the HoloLens 2, Azurion strives to provide surgeons with real-time patient data and dynamic 3D imagery as they operate.
Moreover, AR headsets and the virtual overlays they provide will exponentially improve sharing of expertise across hospitals and medical practices. Niche medical specialists will be able to direct surgeons remotely from across the country (not to mention the other side of the planet), or even view annotated AR scans to offer their advice.
Magic Leap, in its own right, is now collaborating with German medical company Brainlab to create a 3D spatial viewer that would allow clinicians to work together in surgical procedures across disciplines.

But beyond democratizing medical expertise, AR will even provide instantaneous patient histories, gearing doctors with AI-processed information for more accurate diagnoses in a fraction of the time.
By saving physicians’ time, AR will therefore free doctors to spend a greater percentage of their day in face-to-face contact with their patients, building trust and compassion and creating opportunities to educate healthcare consumers (rather than merely treating them).
And when it comes to digital records, doctors can simply use voice control to transcribe entire interactions and patient visits, multiplying what can be done in a day, and vastly improving the patient experience.
(2) Assistance for those with disabilities:
Today, over 3.4 million visually impaired individuals reside in the U.S. alone. But thanks to new developments in AI-integrated smart glasses, those constraints could soon ease dramatically.
And new pioneers continue to enter the market, including NavCog, Horus, AIServe, and MyEye, among others. Microsoft has even developed a “Seeing AI” app, which translates the world, as seen through a smartphone’s camera lens, into audio descriptions for the blind.

During the Reality Virtual Hackathon in January, hosted by Magic Leap at MIT, two of the top three winners focused on accessibility. CleARsite provided environment reconstruction, haptic feedback, and Soundfield Audio overlay to enhance a visually impaired individual’s interaction with the world. Meanwhile, HeAR used a Magic Leap 1 headset to translate speech or sign language into readable text in speech bubbles in the user’s field of view. Magic Leap remains dedicated to numerous such applications, each slated to vastly improve quality of life.
(3) Biometric displays:
In biometrics, cyclist sunglasses and swimmer goggles have evolved into the perfect medium for AR health-metric displays. Smart glasses like the Solos ($499) and Everysight Raptors ($599) provide cyclists with data on speed, power, and heart rate, along with navigation instructions. Meanwhile, Form goggles ($199), just released at the end of August, show swimmers their pace, calories burned, distance, and stroke count in real time, up to 32 feet underwater.

Accessible health data will shift off our wrists and into our fields of view, offering us personalized health recommendations and pushing our training limits alike.
Retail & Advertising
(1) Virtual shopping:
The year is 2030. Walk into any (now AI-driven, sensor-laden, and IoT-retrofitted) store, and every mannequin will be wearing a digital design customized to your preferences. Forget digging through racks of garments or hunting down your size. Cross-referencing your purchase history, gaze patterns, and current closet inventory, AIs will display tailor-made items most suitable for your wardrobe, adjusted to your individual measurements.

Google Lens, an app available on most Android smartphones, is already leaping into this marketplace, allowing users to scan QR codes and objects through their smartphone cameras. Its Style Match feature even lets consumers identify pieces of clothing or furniture and view similar designs available online and through e-commerce platforms.
(2) Advertising:
And these mobile AR features are quickly encroaching upon ads as well.
In July, the New York Times debuted an AR ad for Netflix’s “Stranger Things,” for instance, guiding smartphone users to scan the page with their Google Lens app and experience the show’s fictional Starcourt Mall come to life.

But immersive AR advertisements of the future won’t all be unsolicited and obtrusive. Many will likely prove helpful.
As you walk down a grocery store aisle, discounts and special deals on your favorite items might populate your AR smart glasses. Or if you find yourself admiring an expensive pair of pants, your headset might suggest similar items at a lower cost, or cheaper distributors with the same product. Passing a stadium on the way to work, next weekend’s best concert ticket deals might filter through your AR suggestions—whether your personal AI intends them for your friend’s upcoming birthday or your own enjoyment.
Instead of bombarding you at every turn from a handheld device you have to carry, ads will appear only when most relevant to your physical surroundings. Or toggle them off entirely, and have your personal AI do the product research for you.
Education & Travel
(1) Customized, continuous learning:
The convergence of today’s AI revolution with AR advancements gives us the ability to create individually customized learning environments.
Throw sensors into the mix for tracking neural and physiological data, and students will soon be empowered to cultivate a growth mindset and even work toward achieving a flow state (which research shows can vastly amplify learning).

Within the classroom, Magic Leap One’s Lumin operating system allows multiple wearers to share in a digital experience, such as a dissection or historical map. And from a collaborative creation standpoint, students can use Magic Leap’s CAD application to join forces on 3D designs.
If successful, AR’s convergence with biometric sensors and AI will give rise to an extraordinarily different education system: one composed of delocalized, individually customizable, responsive, and accelerated learning environments.
Continuous, learn-everywhere education will no longer be confined to the classroom. Already, numerous AR mobile apps can identify objects in a user’s visual field, instantaneously presenting relevant information. As user interface hardware undergoes a dramatic shift in the next decade, these software capabilities will only explode in development and use.
Gazing out your window at a cloud will unlock interactive information about the water cycle and climate science. Walking past an old building, you might effortlessly learn about its history dating back to the sixteenth century. I often discuss information abundance, but it is data’s accessibility that will soon drive knowledge abundance.
(2) Training:
AR will enable on-the-job training at far lower costs in almost any environment, from factories to hospitals.
Smart glasses are already beginning to guide manufacturing plant employees as they learn how to assemble new equipment. And retailers stand to slash the time it takes to train a new employee with AR tours and product descriptions.
Automotive technicians, too, can already better understand the internal components of a vehicle without dismantling it. Jaguar Land Rover, for instance, recently implemented Bosch’s Re’flekt One AR solution, which trains technicians with “x-ray” vision, allowing them to visualize the insides of Range Rover Sport vehicles without removing their dashboards.
In healthcare, medical students will be able to practice surgeries on artificial cadavers with hyper-realistic AR displays. Not only will this allow them to rapidly iterate on their surgical skills, but AR will dramatically lower the cost and constraints of standard medical degrees and specializations.
Meanwhile, sports training in simulators will vastly improve with advanced AR headset technology. Even practicing chess or piano will be achievable with any tabletop surface, allowing us to hone real skills with virtual interfaces.
(3) Travel:
As with most tasks, AI’s convergence with AR glasses will allow us to outsource all the most difficult (and least enjoyable) decisions associated with travel, whether finding the best restaurants or well-suited local experiences.
But perhaps one of AR’s more sophisticated uses (already rolling out today) involves translation. Whether you need to decode a menu or access subtitles while conversing across a language barrier, instantaneous translation is about to improve exponentially with the rise of AI-powered AR glasses. Even today, Google Translate can already convert menu text and street signs in real time through your smartphone.
Manufacturing
As I explored last week, manufacturing presents the nearest-term frontier for AR’s commercial use. As a result, many of today’s leading headset companies—including Magic Leap, Vuzix, and Microsoft—are seeking out initial adopters and enterprise applications in the manufacturing realm.

(1) Design:
Targeting the technology for simulation purposes, Airbus launched an AR model of the MRH-90 Taipan aircraft just last year, allowing designers and engineers to view various components, potential upgrades, and electro-optical sensors before execution. Saving big on parts and overhead costs, Airbus thereby gave technicians the opportunity to make important design changes while staying hands-on with the aircraft.
(2) Supply chain optimization:
AR guidance linked to a centralized AI will also mitigate supply chain inefficiencies. Coordinating moving parts, eliminating the need to hold a scanner at each checkpoint, and directing traffic within warehouses will vastly improve workflow.
After initially implementing AR “vision picking” in 2015, leading logistics company DHL recently announced it would continue to use Google’s newest smart glasses in warehouses across the world. Or take automotive supplier ZF, which has now rolled out use of the HoloLens in plant maintenance.

(3) Quality assurance & accessible expertise:
AR technology will also play a critical role in quality assurance, as it already does in Porsche’s assembly plant in Leipzig, Germany. Whenever manufacturers require guidance from engineers, remote assistance is effectively no longer remote, as equipment experts guide employees through their AR glasses and teach them on the job.
Transportation & Navigation
(1) Autonomous vehicles:
To start, Nvidia’s Drive platform for Level 2+ autonomous vehicles is already combining sensor fusion and perception with AR dashboard displays to alert drivers of road hazards, highlight points of interest, and provide navigation assistance.

And in our current transition phase of partially autonomous vehicles, such AR integration allows drivers to monitor conditions yet eases the burden of constant attention to the road. Along these lines, Volkswagen has already partnered with Nvidia to produce I.D. Buzz electric cars, set to run on the Drive OS by 2020. And Nvidia’s platform is fast on the move, having additionally partnered with Toyota, Uber, and Mercedes-Benz. Within just the next few years, AR displays may be commonplace in these vehicles.
(2) Navigation:

We’ve all seen (or been) that someone spinning around with their smartphone to decipher the first few steps of a digital map’s commands. But AR is already making everyday navigation intuitive and efficient.
Google Maps’ AR feature has already been demoed on Pixel phones: instead of staring at your map from a bird’s-eye view, you direct your camera at the street, and directions are superimposed directly onto the live view.
Not only that, but as AI identifies what you see, it instantaneously communicates with your GPS to pinpoint your location and orientation. Although a mainstream rollout date has not yet been announced, this feature will likely make it to your phone in the very near future.
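The core trick can be sketched in a few lines. Below is a simplified, hypothetical illustration of fusing a coarse GPS fix with a camera-based landmark estimate, weighting each by its uncertainty; the coordinates and error figures are my assumptions, and Google's production pipeline is far more sophisticated:

```python
# Fuse two noisy position estimates, weighting each by inverse variance,
# so the less uncertain source dominates the final fix.
def fuse_position(gps_xy, gps_sigma_m, visual_xy, visual_sigma_m):
    w_gps, w_vis = 1 / gps_sigma_m**2, 1 / visual_sigma_m**2
    return tuple(
        (w_gps * g + w_vis * v) / (w_gps + w_vis)
        for g, v in zip(gps_xy, visual_xy)
    )

# GPS alone: ~10 m of error. Visually matching a storefront: ~1 m of error.
print(fuse_position((105.0, 200.0), 10.0, (100.0, 198.0), 1.0))
# -> (~100.05, ~198.02): the fused fix sits ~99% of the way to the visual estimate.
```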
Entertainment
(1) Gaming:
We got our first taste of AR’s real-world gamification in 2016, when Nintendo released Pokémon Go. Today, the gaming app has surpassed 1 billion downloads. And in contrast to VR, AR is increasingly seen as a medium for bringing gamers together in the physical world, encouraging outdoor exploration, activity, and human connection in the process.
And in the recently exploding eSports industry, AR has the potential to turn players’ screens into live-action stadiums. Just this year, the global eSports market is projected to exceed US$1.1 billion in revenue, and AR’s potential to elevate the experience will only see this number soar.
(2) Art:
Many of today’s most popular AR apps allow users to throw dinosaurs into their surroundings (Monster Park), learn how to dance (Dance Reality), or try on highly convincing virtual tattoos (InkHunter).
And as high-definition rendering becomes more commonplace, art, too, will grow more and more accessible.
Magic Leap aims to construct an entire “Magicverse” of digital layers superimposed on our physical reality. Location-based AR displays, ranging from art installations to gaming hubs, will be viewable in a shared experience across hundreds of headsets. Individuals will simply toggle between modes to access whichever version of the universe they desire. Endless opportunities to design our surroundings will arise.
Apple, in its own right, recently announced the company’s [AR]T initiative, which consists of floating digital installations. Viewable through [AR]T Viewer apps in Apple stores, these installations can also be found in [AR]T City Walks guiding users through popular cities, and [AR]T Labs, which teach participants how to use Swift Playgrounds (an iPad app) to create AR experiences.
(3) Shows:
And at the recent SIGGRAPH conference in Los Angeles, Magic Leap introduced an AR-theater hybrid called Mary and the Monster, wherein viewers watched a barren “diorama-like stage” come to life in AR.

While audience members shared a common experience, as in a traditional play, individuals could also zoom in on specific actors to observe their expressions more closely.
Say goodbye to opera glasses and hello to AR headsets.
Final Thoughts
While AR headset manufacturers and mixed reality developers race to build enterprise solutions from manufacturing to transportation, AR’s use in consumer products is following close behind.
Magic Leap leads the way in developing consumer experiences we’ve long been waiting for, as the “Magicverse” of localized AR displays in shared physical spaces will reinvent our modes of connection.
And as AR-supportive hardware is now built into today’s newest smartphones, businesses have an invaluable opportunity to gamify products and immerse millions of consumers in service-related AR experiences.
Even beyond the most obvious first-order AR business cases, new industries to support the augmented world of 2030 will soon surge in market competition, whether in headset hardware, data storage solutions, sensors, or holographic and projection technologies.
Jump on the bandwagon now: the future is faster than you think!

How AR, AI, Sensors & Blockchain are Merging Into Web 3.0

How each of us sees the world is about to change dramatically…
For all of human history, the experience of looking at the world was roughly the same for everyone. But boundaries between the digital and physical are beginning to fade.
The world around us is gaining layer upon layer of digitized, virtually overlaid information — making it rich, meaningful, and interactive. As a result, our respective experiences of the same environment are becoming vastly different, personalized to our goals, dreams, and desires.
Welcome to Web 3.0, aka The Spatial Web. In version 1.0, static documents and read-only interactions limited the internet to one-way exchanges. Web 2.0 provided quite an upgrade, introducing multimedia content, interactive web pages, and participatory social media. Yet, all this was still mediated by 2D screens.
And today, we are witnessing the rise of Web 3.0, riding the convergence of high-bandwidth 5G connectivity, rapidly evolving AR eyewear, an emerging trillion-sensor economy, and ultra-powerful AIs.
As a result, we will soon be able to superimpose digital information atop any physical surrounding—freeing our eyes from the tyranny of the screen, immersing us in smart environments, and making our world endlessly dynamic.
In this third blog of our five-part series on augmented reality, we will explore the convergence between AR, AI, sensors, and blockchain, diving into the implications through a key use case in manufacturing.
A Tale of Convergence
Let’s deconstruct everything beneath the sleek AR display.
It all begins with Graphics Processing Units (GPUs), electronic circuits that perform rapid calculations to render images. (GPUs can be found in mobile phones, game consoles, and computers.)
However, because AR requires such extensive computing power, single GPUs will not suffice. Instead, blockchain can now enable distributed GPU processing power, and blockchains specifically dedicated to AR holographic processing are on the rise.
Next up, cameras and sensors will aggregate real-time data from any environment to seamlessly integrate physical and virtual worlds. Meanwhile, body-tracking sensors are critical for aligning a user’s self-rendering in AR with a virtually enhanced environment. Depth sensors then provide data for 3D spatial maps, while cameras absorb more surface-level, detailed visual input. In some cases, sensors might even collect biometric data, such as heart rate and brain activity, to incorporate health-related feedback in our everyday AR interfaces and personal recommendation engines.
The next step in the pipeline involves none other than AI. Processing enormous volumes of data instantaneously, embedded AI algorithms will power customized AR experiences in everything from artistic virtual overlays to personalized dietary annotations.
In retail, AIs will use your purchasing history, current closet inventory, and possibly even mood indicators to display digitally rendered items most suitable for your wardrobe, tailored to your measurements.
In healthcare, smart AR glasses will provide physicians with immediately accessible and maximally relevant information (parsed from the entirety of a patient’s medical records and current research) to aid in accurate diagnoses and treatments, freeing doctors to engage in the more human-centric tasks of establishing trust, educating patients and demonstrating empathy.
Convergence in Manufacturing
One of the nearest-term use cases of AR is manufacturing, as large producers begin dedicating capital to enterprise AR headsets. And over the next ten years, AR will converge with AI, sensors, and blockchain to multiply manufacturer productivity and employee experience.
(1) Convergence with AI
In initial applications, digital guides superimposed on production tables will vastly improve employee accuracy and speed, while minimizing error rates.
Already, the International Air Transport Association (IATA), whose member airlines account for 82 percent of air travel, has implemented industrial tech company Atheer’s AR headsets in cargo management. And with barely any delay, IATA reported a whopping 30 percent improvement in cargo handling speed and no less than a 90 percent reduction in errors.
With similar success rates, Boeing brought Upskill’s Skylight smart glasses to the runway; the platform is now used in the manufacturing of hundreds of airplanes. Sure enough, the aerospace giant has seen a 25 percent drop in production time and near-zero error rates.
Beyond cargo management and air travel, however, smart AR headsets will also enable on-the-job training without reducing the productivity of other workers or sacrificing hardware. Jaguar Land Rover, for instance, implemented Bosch’s Re’flekt One AR solution to gear technicians with “x-ray” vision, allowing them to visualize the insides of Range Rover Sport vehicles without removing any dashboards.
And as enterprise capabilities continue to soar, AIs will soon become the go-to experts, offering support to manufacturers in need of assembly assistance. Instant guidance and real-time feedback will dramatically reduce production downtime, boost overall output, and even help customers struggling with DIY assembly at home.
Perhaps one of the most profitable business opportunities, AR guidance through centralized AI systems will also serve to mitigate supply chain inefficiencies at extraordinary scale. Coordinating moving parts, eliminating the need for manned scanners at each checkpoint, and directing traffic within warehouses, joint AI-AR systems will vastly improve workflow while overseeing quality assurance.
After its initial implementation of AR “vision picking” in 2015, leading courier company DHL recently announced it would continue to use Google’s newest smart glasses in warehouses across the world. Motivated by the initial group’s reported 15 percent jump in productivity, DHL’s decision is part of the logistics giant’s $300 million investment in new technologies.
And as direct-to-consumer e-commerce fundamentally transforms the retail sector, supply chain optimization will only grow increasingly vital. AR could very well prove the definitive step for gaining a competitive edge in delivery speeds.
As explained by Vital Enterprises CEO Ash Eldritch, “All these technologies that are coming together around artificial intelligence are going to augment the capabilities of the worker and that’s very powerful. I call it Augmented Intelligence. The idea is that you can take someone of a certain skill level and by augmenting them with artificial intelligence via augmented reality and the Internet of Things, you can elevate the skill level of that worker.”
Already, large producers like Goodyear, thyssenkrupp, and Johnson Controls are using the Microsoft HoloLens 2—priced at $3,500 per headset—for manufacturing and design purposes.
Perhaps the most heartening outcome of the AI-AR convergence is that, rather than replacing humans in manufacturing, AR is an ideal interface for human collaboration with AI. And as AI merges with human capital, prepare to see exponential improvements in productivity, professional training, and product quality.
(2) Convergence with Sensors
On the hardware front, these AI-AR systems will require a mass proliferation of sensors to detect the external environment and apply computer vision in AI decision-making.
To measure depth, for instance, some scanning depth sensors project a structured pattern of infrared light dots onto a scene, detecting and analyzing the reflected light to generate 3D maps of the environment. Stereoscopic imaging, which compares the views from two offset lenses, is another common approach to depth measurement. But leading devices like Microsoft’s HoloLens 2 and Intel’s RealSense 400-series camera implement a new method called “phased time-of-flight” (ToF).
In ToF sensing, the HoloLens 2 fires numerous lasers, each with 100 milliwatts (mW) of power, in quick bursts. The distance between nearby objects and the headset wearer is then measured by how far the phase of the returning light has shifted relative to the original signal. This phase difference reveals the location of each object within the field of view, enabling accurate hand-tracking and surface reconstruction.
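The arithmetic that turns a phase shift into a distance is compact. Here is a minimal sketch, assuming a representative 100 MHz modulation frequency (an illustrative value, not a published HoloLens 2 spec):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(phase_shift_rad, mod_freq_hz):
    # Light makes a round trip, so one full 2*pi phase wrap corresponds
    # to half a modulation wavelength of range: d = c * dphi / (4*pi*f).
    return (C * phase_shift_rad) / (4 * math.pi * mod_freq_hz)

f_mod = 100e6  # assumed modulation frequency (illustrative)
print(tof_distance(math.pi / 2, f_mod))  # ~0.37 m: a hand in front of the headset
print(C / (2 * f_mod))                   # ~1.5 m unambiguous range at one frequency
# Mixing several modulation frequencies (the "phased" approach) extends the
# unambiguous range while preserving fine depth resolution.
```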
With a far lower computing power requirement, the phased ToF sensor is also more durable than stereoscopic setups, which rely on the precise alignment of two prisms. And the phased ToF sensor’s silicon base makes it easy to mass-produce, rendering the HoloLens 2 a far better candidate for widespread consumer adoption.
To apply inertial measurement—typically used in airplanes and spacecraft—the HoloLens 2 additionally uses a built-in accelerometer, gyroscope, and magnetometer. Further equipped with four “environment understanding cameras” that track head movements, the headset also uses a 2.4MP HD photographic video camera and ambient light sensor that work in concert to enable advanced computer vision.
For natural viewing experiences, sensor-supplied gaze tracking increasingly creates depth in digital displays. Nvidia’s work on Foveated AR Display, for instance, brings the primary foveal area into focus, while peripheral regions fall into a softer background, mimicking natural visual perception and concentrating computing power on the area that needs it most.
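A toy sketch captures the essence of foveated rendering: spend full shading resolution only near the gaze point and decay it outward. The zone boundaries and rates below are my illustrative assumptions, not Nvidia's published parameters:

```python
def shading_rate(angle_from_gaze_deg):
    """Fraction of full shading resolution by angular distance from gaze
    (illustrative step falloff; real systems use smoother curves)."""
    if angle_from_gaze_deg <= 5.0:     # foveal zone: full detail
        return 1.0
    if angle_from_gaze_deg <= 20.0:    # parafoveal zone: quarter detail
        return 0.25
    return 0.0625                      # periphery: 1/16 detail

for angle in (2, 10, 40):
    print(f"{angle} deg from gaze -> {shading_rate(angle):.4f}x resolution")
# Most of the field of view renders coarsely, freeing GPU budget for the fovea.
```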
Gaze-tracking sensors are also slated to grant users control over their (now immersive) screens without any hand gestures. Simple visual cues, even staring at an object for more than three seconds, will activate commands instantaneously.
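A dwell-based trigger like that three-second example is simple to express. Here is a sketch, assuming the eye tracker delivers a stream of (timestamp, gazed-object) samples:

```python
DWELL_SECONDS = 3.0  # stare this long to fire a command (threshold from the text)

def detect_dwell(gaze_samples):
    """Yield an object ID once the gaze has rested on it for DWELL_SECONDS.
    gaze_samples: iterable of (time_in_seconds, object_id_or_None)."""
    current, since = None, 0.0
    for t, obj in gaze_samples:
        if obj != current:
            current, since = obj, t       # gaze moved: restart the timer
        elif obj is not None and t - since >= DWELL_SECONDS:
            yield obj
            since = float("inf")          # fire once per fixation

samples = [(0.0, "thermostat"), (1.5, "thermostat"), (3.1, "thermostat")]
print(list(detect_dwell(samples)))  # ['thermostat']
```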
And our manufacturing example above is not the only one. Stacked convergence of blockchain, sensors, AI and AR will disrupt almost every major industry.
Take healthcare, for example, wherein biometric sensors will soon customize users’ AR experiences. Already, MIT Media Lab’s Deep Reality group has created an underwater VR relaxation experience that responds to real-time brain activity detected by a modified version of the Muse EEG headband. The experience even adapts to users’ biometric data, from heart rate to electrodermal activity (input from an Empatica E4 wristband).
Now rapidly dematerializing, sensors will converge with AR to improve physical-digital surface integration, intuitive hand and eye controls, and an increasingly personalized augmented world. Keep an eye on companies like MicroVision, now making tremendous leaps in sensor technology.
While I’ll be doing a deep dive into sensor applications across each industry in our next blog, it’s critical to first discuss how we might power sensor- and AI-driven augmented worlds.
(3) Convergence with Blockchain
Because AR requires much more compute power than typical 2D experiences, centralized GPUs and cloud computing systems are hard at work to provide the necessary infrastructure. Nonetheless, the workload is taxing and blockchain may prove the best solution.
A major player in this pursuit, Otoy aims to create the largest distributed GPU network in the world, called the Render Network (RNDR). Built specifically on the Ethereum blockchain for holographic media, and now undergoing beta testing, this network is set to revolutionize AR deployment accessibility.
Alphabet Chairman Eric Schmidt (an investor in Otoy’s network) has even said, “I predicted that 90% of computing would eventually reside in the web based cloud… Otoy has created a remarkable technology which moves that last 10%—high-end graphics processing—entirely to the cloud. This is a disruptive and important achievement. In my view, it marks the tipping point where the web replaces the PC as the dominant computing platform of the future.”
Leveraging the crowd, RNDR allows anyone with a GPU to contribute their power to the network for a commission of up to $300 a month in RNDR tokens. These can then be redeemed for cash or used to create users’ own AR content.
In a double win, Otoy’s blockchain network and similar iterations not only allow designers to profit when not using their GPUs, but also democratize the experience for newer artists in the field.
And beyond these networks’ power suppliers, distributing GPU processing power will allow more manufacturing companies to access AR design tools and customize learning experiences. By further dispersing content creation across a broad network of individuals, blockchain also has the valuable potential to boost AR hardware investment across a number of industry beneficiaries.
On the consumer side, startups like Scanetchain are also entering the blockchain-AR space for a different reason. Allowing users to scan items with their smartphone, Scanetchain’s app provides access to a trove of information, from manufacturer and price, to origin and shipping details.
Based on NEM (a peer-to-peer blockchain platform), the app aims to make information far more accessible and, in the process, create a social network of purchasing behavior. Users earn tokens by watching ads, and all transactions are hashed into blocks and securely recorded.
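To make "hashed into blocks" concrete, here is a generic, textbook-style sketch of chaining transaction blocks by hash. It illustrates the principle only; it is not NEM's actual consensus code, and the sample transactions are invented:

```python
import hashlib
import json
import time

def make_block(transactions, prev_hash):
    """Bundle transactions with the previous block's hash, so tampering
    with any past record invalidates every block that follows."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

genesis = make_block([{"scan": "sku-123", "user": "alice"}], "0" * 64)
nxt = make_block([{"scan": "sku-456", "user": "bob"}], genesis["hash"])
print(nxt["prev_hash"] == genesis["hash"])  # True: the chain is linked
```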
The writing is on the wall—our future of brick-and-mortar retail will largely lean on blockchain to create the necessary digital links.
Final Thoughts
Integrating AI into AR creates an “auto-magical” manufacturing pipeline that will fundamentally transform the industry, cutting down on marginal costs, reducing inefficiencies and waste, and maximizing employee productivity.
Bolstering the AI-AR convergence, sensor technology is already blurring the boundaries between our augmented and physical worlds, soon to be near-undetectable. While intuitive hand and eye motions dictate commands in a hands-free interface, biometric data is poised to customize each AR experience to be far more in touch with our mental and physical health.
And underpinning it all, distributed computing power with blockchain networks like RNDR will democratize AR, boosting global consumer adoption at plummeting price points.
As AR soars in importance—whether in retail, manufacturing, entertainment, or beyond—the stacked convergence discussed above merits significant investment over the next decade. Already, 52 Fortune 500 companies have begun testing and deploying AR/VR technology. And while global revenue from AR/VR stood at $5.2 billion in 2016, market intelligence firm IDC predicts the market will exceed $162 billion in value by 2020.
The augmented world is only just getting started.

Augmented Reality is about to add a digital intelligence layer

Augmented Reality is about to add a digital intelligence layer to our every surrounding, transforming retail, manufacturing, education, tourism, real estate, and almost every major industry that holds up our economy today.
The global VR/AR market is surging at a 63 percent CAGR and is projected to reach a value of $814.7 billion by 2025.
Apple’s Tim Cook has remarked, “I regard [AR] as a big idea like the smartphone […] The smartphone is for everyone. We don’t have to think the iPhone is about a certain demographic, or country, or vertical market. It’s for everyone. I think AR is that big, it’s huge.”
And as Apple, Microsoft, Alphabet, and numerous other players begin entering the AR market, we are on the cusp of witnessing a newly augmented world.
In one of the greatest technological revolutions of this century, smartphones dematerialized cameras, stereos, video game consoles, TVs, GPS systems, calculators, paper, and even matchmaking as we knew it.
AR glasses will soon extend this trend, ultimately dematerializing the smartphone itself. We will no longer gaze into tiny, two-dimensional screens but rather see through a fully immersive, 3D interface.
While already beginning to permeate mobile applications, AR will soon migrate to headsets, and eventually reach us through contact lenses — replacing over 3 billion smartphones in use today.
I am immensely excited about this five-part AR blog series. In it, we will cover:
- Importance of AR as an emerging technology
- Leading AR hardware
- AR convergence with AI, blockchain, and sensors
- Industry-specific applications
- Broader implications of the AR Cloud
Let’s dive in!
Introducing the Augmented World
AR superimposes digital worlds onto physical environments (by contrast to VR, which completely immerses users in digital realities). In this way, AR allows users to remain engaged with their physical surroundings, serving as a visual enhancement rather than replacement.
As AR hardware costs continue to plummet — and advancements in connectivity begin enabling low-latency, high-resolution rendering — today’s AR producers are initially targeting businesses through countless enterprise applications.
And while AR headsets remain too pricey for widespread consumer adoption, distribution is fast increasing. Roughly 150,000 headsets were shipped in 2016, and this number is expected to reach 22.8 million by 2022.
Meanwhile, AR app development has skyrocketed, allowing smartphone users to sample rudimentary levels of the technology through numerous mobile applications. Already, over 1 billion people across the globe use mobile AR, and a majority of mobile AR integrations involve social media (84%) and e-commerce (41%).
Yet while well-known players like Microsoft, Apple, Alphabet, Qualcomm, Samsung, NVIDIA, and Intel have made tremendous strides, well-funded startups remain competitive.
Magic Leap, a company aiming to eliminate the screen altogether, has raised a total of $2.6 billion since its founding in 2010. With its own head-mounted virtual retinal display, Magic Leap projects a digital light field into users’ eyes to superimpose 3D computer-generated imagery over set environments, whether social avatars, news broadcasts or interactive games.
Mojo Vision, in its own right, has raised $108 million in its efforts to develop and produce an AR contact lens. Or take Samsung’s recently granted U.S. patent to develop smart lenses capable of streaming text, capturing videos, and even beaming images directly into a wearer’s eyes. Given their multi-layered lens architecture, the contacts are even designed to include a motion sensor (for eye movement tracking), hidden camera, and display unit.
And as of this writing, nearly 1,800 different AR startups populate the startup platform AngelList.
While AR isn’t (yet) as democratized as VR, $100 will get you an entry-level Leap Motion headset, while a top-of-the-line Microsoft HoloLens 2 remains priced at $3,500. However, heads-up displays in luxury automobiles, arguably the first AR applications to go mainstream, will soon become a standard commodity in economy models.
And as corporate partnerships with AR startups grow increasingly common, the convergence of augmented reality with sensors, networks, and IoT will transform almost every industry imaginable.
A Taste of Industry Transformations
Over the next few weeks of blogs, we will do a deeper dive into each industry, but it is worth considering some of AR’s most notable implications across a range of sectors.
In Manufacturing & Industry, AR training simulations are already beginning to teach us how to operate numerous machines and equipment, even to fly planes. Microsoft, for instance, is targeting enterprise clients with its HoloLens 2, as the AR device’s Remote Assist function allows workers to call in virtual guidance if unfamiliar problems arise in the manufacturing process.
Healthcare: AR will allow surgeons to “see inside” clogged arteries, provide precise incision guides, or flag potential risks, introducing seamless efficiency in everything from reconstructive surgeries to meticulous tumor removals. Medical students will use AR to peel back layers on virtual cadavers. And in everyday health, we will soon track nearly every health and performance metric — whether heart rate, blood pressure, or nutritional data — through AR lenses (as opposed to wearables).
Education: In our classrooms, AR will allow children (and adults alike!) to explore both virtual objects and virtual worlds. But beyond the classroom, we will have the option to employ AR as a private teacher wherever we go. Buildings will project their history into our field of view. Museums might have AR-enhanced displays. Every pond and park will double as a virtual-overlaid lesson in biology and ecology. Or teach your children the value of money with virtual budgeting and mathematical tabulations at grocery and department stores. Already, apps like Sky Map and Google Translate allow users to learn about their surroundings through smartphone camera lenses, and AR’s teaching capabilities are only on the rise.
Yet Retail & Advertising take AR’s transformative potential to a new level. Hungry and on a budget? Your smart AR contact lenses might show you all available lunch specials on the block, cross-referenced with real-time customer ratings, special deals, and your own health data for individualized recommendations. Storefront windows will morph to display your personalized clothing preferences, continuously tracked by AI, as eye-tracking technology allows your AR lenses to project every garment that grabs your attention onto your form, in your size. Smart AR advertising — if enabled — will target your every unique preference, transparently informing you of comparable, cheaper options the minute you reach for an item.
And in Entertainment, we will soon be able to toggle into imaginary realities, or even customize physical spaces with our own designs. 3D creations will become intuitive and shareable. Sports player stats will be superimposed onto live sporting events, as spectators recreate immersive stadiums with front-row seats in their own backyards. Turn on game mode, and every streetside, park, store, and neighborhood merges into a virtually overlaid game, socially interactive and interspersed with everyday life.
In Transportation, AR displays integrated in vehicle windows will allow users to access real-time information about the restaurants, stores, and landmarks they pass. Walking, biking, and driving directions will be embedded in our routes through AR. And when sitting in your autonomous vehicle-turned office on the way to work, AR will have the power to convert any vessel into a virtual haven of your choice.
A Day in the Life of 2030
Reaching for your AR-enabled glasses upon waking up, your Jarvis-like AI populates your visual field with any new updates and personalized notifications.
You begin the day with a new pancake recipe, directed seamlessly by a cooking app in your AR glasses, with ingredients tailored to new programmed dietary preferences. Glancing at your plate, your glasses inform you of the meal’s nutritional value, tracking these metrics in your health monitor.
As you need to fly cross-country today, your AI hails an autonomous shuttle to the airport. Along the way, you switch your glasses to creation mode, allowing you to populate entire swaths of the city with various art pieces your friends have created in the virtual world. Dropping a few of your own 3D designs across the city, your AR glasses even allow you to turn the vehicle floor into a virtual pond as you glide along a smart highway (equipped for electric vehicle charging).
Upon arriving at the airport, your AR glasses switch gears to navigation mode, displaying arrows that direct you seamlessly to your boarding gate.
Walking into your hotel, you activate tourist mode, offering a number of facts and relevant figures about nearby historical buildings and monuments. Toggle to restaurant mode for a look at nearby eatery reviews, tailored to the colleagues you’ll be dining with.
Winding down, you briefly scroll through some pictures captured with your glasses throughout the day, sharing them with family through an interface completely controlled via eye movements.
Welcome to the augmented world of 2030.
Final Thoughts
While enterprises are fueling initial deployment of AR headsets for employee training and professional retooling, widespread consumer adoption is fast reaching the horizon. And as hardware and connectivity skyrocket, driving down prices and democratizing access, sleek AR glasses — if not dematerialized lenses — will become an everyday given.
Advancements in cloud computing and 5G coverage are making AR products infinitely more scalable, ultra-fast, and transportable.
Yet ultimately, AR will give rise to neural architectures directly embedded through brain-computer interfaces. Our mode of interaction with the IoT will evolve from smartphone screens, to AR glasses, to contact lenses, to BCIs.

Smart Technology and Integration: How It’s Changing Our Lives
Each week, an estimated 1.3 million people move into cities, driving urbanization at an unstoppable pace.
By 2040, about two-thirds of the world’s population will be concentrated in urban centers. Over the decades ahead, 90 percent of this urban population growth is predicted to flourish across Asia and Africa.
Already, 1,000 smart city pilots are under construction or in their final urban planning stages across the globe, driving forward countless visions of the future.
As data becomes the gold of the 21st century, centralized databases and hyper-connected infrastructures will enable everything from sentient cities that respond to data inputs in real time, to smart public services that revolutionize modern governance.
Connecting countless industries — real estate, energy, sensors and networks, transportation, among others — tomorrow’s cities pose no end of creative possibilities and stand to completely transform the human experience.
In this blog, we’ll be taking a high-level tour of today’s cutting-edge urban enterprises involved in these three areas:
- Hyperconnected urban ecosystems that respond to your data
- Smart infrastructure and construction
- Self-charging green cities
Let’s dive in!
Smart Cities that Interact with Your Data
Any discussion of smart cities must also involve today’s most indispensable asset: data.
As 5G connection speeds, IoT-linked devices and sophisticated city AIs give birth to trillion-sensor economies, low latencies will soon allow vehicles to talk to each other and infrastructure systems to self-correct.
Even public transit may soon validate your identity with a mere glance in any direction, using facial recognition to charge you for individualized travel packages and distances.
As explained by Deloitte Public Sector Leader Clare Ma, “real-time information serves as the ‘eye’ for urban administration.”
In most cities today, data is fragmented across corporations, SMEs, public institutions, nonprofits, and personal databases, with little standardization.
Yet to identify and respond to urban trends, we need a way of aggregating multiple layers of data, spanning traffic flows, human movement, individual transactions, shifts in energy usage, security activity, and almost any major component of contemporary economies.
Only through real-time analysis of information flows can we leverage exponential technologies to automate public services, streamlined transit, smarter security, optimized urban planning and responsive infrastructure.
And already, cutting-edge cities across the globe are building centralized data platforms to combine different standards and extract actionable insights, from smart parking to waste management.
Take China’s Nanjing, for instance.
With sensors installed in 10,000 taxis, 7,000 buses and over 1 million private vehicles, the city aggregates daily data across both physical and virtual networks. After transmitting it to the Nanjing Information Center, experts can then analyze traffic data, send smartphone updates to commuters and ultimately create new traffic routes.
Replacing the need for capital-intensive road and public transit reconstruction, real-time data from physical transit networks allow governments to maximize the value of preexisting assets, saving time and increasing productivity for millions of citizens.
But beyond traffic routing, proliferating sensors and urban IoT are giving rise to real-time monitoring of any infrastructural system.
Italy’s major rail operator Trenitalia has now installed sensors on all its trains, deriving real-time status updates on each train’s mechanical condition. With maintenance needs now predicted in advance of system failure, transit disruptions are becoming a thing of the past.
Los Angeles has embedded sensors in 4,500 miles’ worth of new LED streetlights (replacing the previous bulbs). The minute one street bulb malfunctions or runs low, it can be fixed near-immediately, forming part of a proactive city model that catches glitches the moment they occur.
And Hangzhou, home to e-commerce giant Alibaba, has now launched a “City Brain” project, aiming to build out one of the most data-responsive cities on the planet.
With cameras and other sensors installed across the entire city, a centralized AI hub processes data on everything from road conditions and weather to vehicular collisions and citizen health emergencies.

Overseeing a population of nearly 8 million residents, Hangzhou’s City Brain manages traffic signals at 128 intersections (coordinating over 1,000 road signals simultaneously), tracks ambulances en route and clears their paths to hospitals without risk of collision, directs traffic police to accidents at record rates, and even assists city officials in expedited decision-making. No more wasting time at a red light when there is obviously no cross traffic or pedestrians.
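That "no pointless red lights" behavior reduces to demand-responsive signal control. Here is a deliberately tiny sketch of the idea; queue counts would come from the city's cameras and sensors, and the logic is my illustration, not Alibaba's actual algorithm:

```python
def pick_green_phase(queue_lengths):
    """Give the green to the approach with the longest detected queue;
    never hold a phase for an approach with no waiting traffic."""
    waiting = {phase: n for phase, n in queue_lengths.items() if n > 0}
    if not waiting:
        return None  # nobody waiting anywhere: hold the current state
    return max(waiting, key=waiting.get)

# Detected queues at one intersection: cross traffic is empty.
print(pick_green_phase({"north_south": 12, "east_west": 0}))  # 'north_south'
```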
Already, the City Brain has cut ambulance and commuter traveling times by half. And as reported by China’s first AI-partnered traffic policeman Zheng Yijiong, “the City Brain can detect accidents within a second” allowing police to “arrive at [any] site [within] 5 minutes” across an urban area of over 3,000 square miles.
But beyond oversight of roads, traffic flows, collisions and the like, converging sensors and AI are now being used to monitor crowds and analyze human movement.
Companies like SenseTime now offer software to police bureaus that can not only identify live faces, individual gaits and car license plates, but even monitor crowd movement and detect unsafe pedestrian concentrations.
Some researchers have even posited the use of machine learning to predict population-level disease spread through crowd surveillance data, building actionable analyses from social media data, mass geolocation and urban sensors.
Yet aside from self-monitoring cities and urban AI ‘brains,’ what if infrastructure could heal itself on demand? Forget sensors, connectivity, and AI; enter materials science.
Self-Healing Infrastructure
The U.S. Department of Transportation estimates a $542.6 billion backlog in needed U.S. infrastructure repairs alone.
And as I’ve often said, the world’s most expensive problems are the world’s most profitable opportunities.
Enter self-healing construction materials.
First up, concrete.
In an effort to multiply the longevity of bridges, roads, and any number of infrastructural fortifications, engineers at Delft University of Technology have developed a prototype bio-concrete that can repair its own cracks.
The key ingredients of this novel ‘bio-concrete’ are minute capsules of limestone-producing bacteria, mixed in with calcium lactate and distributed throughout the concrete structure. Only when the concrete cracks, letting in air and moisture, do the bacteria awaken.
Like clockwork, the bacteria begin feeding on the surrounding calcium lactate, producing a natural limestone sealant that can fill cracks in a mere three weeks, long before small crevices can threaten structural integrity.
As head researcher Henk Jonkers explains, “What makes this limestone-producing bacteria so special is that they are able to survive in concrete for more than 200 years and come into play when the concrete is damaged. […] If cracks appear as a result of pressure on the concrete, the concrete will heal these cracks itself.”
Yet other researchers have sought to crack the code (no pun intended) of living concrete, testing everything from hydrogels that expand 10X or even 100X their original size when in contact with moisture, to fungal spores that grow and precipitate calcium carbonate the minute micro-cracks appear.
But bio-concrete is only the beginning of self-healing technologies.
As futurist architecture firms start printing plastic and carbon-fiber houses, engineers are tackling self-healing plastic that could change the game with economies of scale.
Plastic not only holds promise in real estate on Earth; it will also serve as a handy material in space. NASA engineers have pioneered a self-healing plastic that may prove vital in space missions, preventing habitat and ship ruptures in record speed.
The implications of self-healing materials are staggering, offering us resilient structures both on Earth and in space.
One additional breakthrough worth noting involves the magic of graphene.
Perhaps among the greatest physics discoveries of the century, graphene is a 2D honeycomb lattice of carbon atoms, over 200X stronger than steel yet only one atom thick.
While yet to come down in cost, graphene unlocks an unprecedented host of possibilities, from weather-resistant and ultra-strong coatings for existing infrastructure, to multiplied infrastructural lifespans. Some have even posited graphene’s use in the construction of 30 km tall buildings.
And it doesn’t end there.
As biomaterials and novel polymers will soon allow future infrastructure to heal on its own, nano- and micro-materials are ushering in a new era of smart, super-strong and self-charging buildings.

Revolutionizing structural flexibility, carbon nanotubes are already dramatically increasing the strength-to-weight ratio of skyscrapers.
But imagine if we could engineer buildings that could charge themselves… or better yet, produce energy for entire cities, seamlessly feeding energy to the grid.
Self-Powering Cities
As exponential technologies across energy and water burst onto the scene, self-charging cities are becoming today’s testing ground for a slew of green infrastructure pilots, promising a future of self-sufficient societies.
In line with new materials, one hot pursuit is the creation of commercially viable, solar-power-generating windows.
In the past few years, several research teams have pioneered silicon nanoparticles that capture everyday light flowing through our windows. Tiny solar cells at the edges of each window then harvest this energy for ready use.
Scientists at Michigan State, for instance, have developed novel “solar concentrators.” Capable of being layered over any window, these concentrators capture non-visible wavelengths of light, near-infrared and ultraviolet, and guide them to solar cells embedded at the edges of each window panel.
Rendered entirely invisible, such solar cells could generate energy on almost any sun-facing screen, from electronic gadgets to glass patio doors to reflective skyscrapers.
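Some rough arithmetic hints at the scale. Every figure below (glazing area, concentrator efficiency, and peak-sun hours) is an illustrative assumption rather than a measured result:

```python
# Back-of-the-envelope output for one sun-facing glass facade.
window_area_m2 = 500.0          # assumed glazing area on one facade
peak_irradiance_w_m2 = 1000.0   # standard peak solar irradiance
efficiency = 0.05               # assumed ~5% transparent-concentrator efficiency
peak_sun_hours = 5.0            # assumed daily equivalent full-sun hours

peak_kw = window_area_m2 * peak_irradiance_w_m2 * efficiency / 1000
daily_kwh = peak_kw * peak_sun_hours
print(f"~{peak_kw:.0f} kW peak, ~{daily_kwh:.0f} kWh/day")
# ~25 kW peak and ~125 kWh/day: modest per building, meaningful city-wide.
```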
And beyond self-charging windows, countless future city pilots have staked ambitious goals for solar panel farms and renewable energy targets.
Take Dubai’s “Strategic Plan 2021,” for instance.
Touting a multi-decade Dubai Clean Energy Strategy, Dubai aims to gradually derive 75 percent of its energy from clean sources by 2050.
With plans to launch the largest single-site solar project on the planet by 2030, boasting a projected capacity of 5,000 megawatts, Dubai further aims to derive 25 percent of its energy needs from solar power in the next decade.
And in the city’s “Strategic Plan 2021,” Dubai aims to soon:
- 3D-print 25 percent of its buildings;
- Make 25 percent of transit automated and driverless;
- Install hundreds of artificial “trees,” all leveraging solar power and providing the city with free WiFi, info-mapping screens, and charging ports;
- Integrate passenger drones capable of carrying individuals to public transit systems;
- And drive forward countless designs of everything from underwater bio-desalination plants to smart meters and grids.

A global leader in green technologies and renewable energy, Dubai stands as a gleaming example that any environmental context can give rise to thriving and self-sufficient eco-powerhouses.
But Dubai is not alone, and others are quickly following suit.
Leading the pack of China’s 500 smart city pilots, Xiong’an New Area (near Beijing) aims to become a thriving economic zone powered by 100 percent clean electricity.
And as of this December, 100 U.S. cities have committed to the same goal and are on their way.
Cities as Living Organisms
As new materials forge ahead to create pliable and self-healing structures, green infrastructure technologies are exploding into a competitive marketplace.
Aided by plummeting costs, future cities will soon surround us with self-charging buildings, green city ecosystems, and urban residences that generate far more than they consume.
And as 5G communications networks, proliferating sensors and centralized AI hubs monitor and analyze every aspect of our urban environments, cities are fast becoming intelligent organisms, capable of seeing and responding to our data in real time.


Contributors: Peter Diamandis and Clifford Locks
Delivering an amazing life breakthrough in your intelligence
In the coming decade, we may soon begin connecting our brains to an AI.
Elon Musk’s company Neuralink just announced groundbreaking progress on its “Brain-Computer Interface” (BCI) technology, striving towards a 2 gigabit-per-second wireless connection between a patient’s brain and the cloud in the next few years.
Initial human trials are expected by the end of 2020. Long-term, Elon expects BCI installation to be as painless and simple as LASIK surgery (a thirty-minute visit, no stitches or general anesthesia required).
Over a decade ago, Ray Kurzweil predicted that our brains would seamlessly connect to the cloud by 2035. Even considering his 86% prediction accuracy rate, this prediction seemed somewhat ambitious. But Neuralink’s recent announcement adds significant credence to Ray’s prediction and timeline.
In the long-term, the implications of high-bandwidth BCI are extraordinary. Nothing is more important to a company, nation, or individual than intelligence. It is the fundamental key to problem-solving and wealth creation, and underpins the human capital that drives every company and nation forward.
BCIs will ultimately make the resource of human intelligence massively abundant.
In this blog, I’ll be exploring:
- Neuralink’s groundbreaking advancements;
- Roadmaps for BCI;
- Implications of human capital abundance & the future of intelligence.
Let’s plug in…
Neuralink Update
Beyond the pioneering technology itself, Neuralink has a compelling business plan.
The company’s brain implants, connected via Bluetooth to an external controller, are designed to first treat patients with cervical fractures and neurological disorders, allowing them to restore somewhat normal function. Long-term, they will be made available to the general population for enhanced capability, or to enable AI enhancement of our brain.
In the company’s first public announcement, Elon outlined three main goals of Neuralink’s device:
- Increase by orders of magnitude the number of neurons you can read from and write to in safe, long-lasting ways;
- At each stage, produce devices that serve critical unmet medical needs of patients;
- Make it as simple and automated as LASIK.
The three-pound organ within our skulls that we call the brain is composed of 100 billion neurons and 100 trillion synapses, encompassing everything we see, feel, hear, taste, and remember. Everything that makes me, me, and everything that makes you, you.
In the near-term, Neuralink aims to restore function to those patients who have suffered brain and spinal injuries, helping reinstate their ability to feel and regain motor agency. Beyond such use cases, however, Neuralink ultimately strives to achieve a full “symbiosis with AI,” according to Elon. He makes the important distinction, however, that merging with AI will be an option — not a requirement — in the future.
BCI devices will serve as the brain’s tertiary “digital superintelligence layer,” a layer we arguably already experience in the form of phones, laptops, wearables, and the like.
Yet as explained by Elon, “the constraint is how well you interface — the input and the output speeds. You have a very slow output speed, with typing on keys. Your input speed is faster due to vision.”
Neuralink will eradicate these barriers to speed, providing instantaneous, seamless access to an abundance of knowledge, processing power, and even sensory experience.
Understanding the Hardware
One breakthrough enabling Neuralink’s technology is the development of flexible electrode “threads” with a diameter measuring one-tenth the width of a human hair (4 – 6 μm in width, or the approximate width of a neuron). These can be inserted into the uppermost levels of the human cortex and interface (read & write) with neurons.
1,024 of these threads attach to a single small Neuralink chip (“N1”) that is embedded into the skull, just below your scalp. Each of the N1 chips collects and transmits 200Mbps of neural data, and up to 10 such chips (implanted into a patient) allow for the grand total of a 2Gbps wireless connection. The wireless connection is then made via Bluetooth to an ear-mounted device that connects this brain data to the cloud.
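The arithmetic behind that 2 Gbps figure is simple enough to check directly:

```python
# Reproducing the bandwidth arithmetic described above.
THREADS_PER_CHIP = 1024   # electrode threads attached to each N1 chip
CHIP_RATE_MBPS = 200      # neural data each chip transmits
CHIPS_PER_PATIENT = 10    # maximum implants described

total_mbps = CHIP_RATE_MBPS * CHIPS_PER_PATIENT
total_threads = THREADS_PER_CHIP * CHIPS_PER_PATIENT

print(f"{total_threads} threads, {total_mbps / 1000:.0f} Gbps total")
# -> 10240 threads, 2 Gbps total
```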
Enter an era wherein users can control their brain implants via an iPhone app. Or imagine the 2030 generation of iPhones (if iPhones are still around), revamped to include a separate App Store: Brain Edition.
Given the threads’ infinitesimal size, large number, and flexibility, Neuralink had to develop a special-purpose, high-precision robot to perform the thread insertion procedure.
Within the procedure, a mere 2mm incision in the scalp and skull is needed for each implant, small enough to be closed with crazy glue. Minimizing risk of brain trauma, the robot’s 24-micron needle is designed to precisely place threads and avoid damaging blood vessels. In initial quadriplegic patients, one array will reside in the somatosensory region of the brain and three in the motor cortex.
As summed up by lead Neuralink surgeon Dr. Matthew MacDougall, “We developed a robotic inserter that can rapidly and precisely insert hundreds of individual threads, representing thousands of distinct electrodes, into the cortex in under an hour.”
Progress in Neuralink’s labs has been fast and furious. Over the past two years, the size-to-performance ratio of Neuralink’s electrodes has improved seven-fold.
Recalling Ray Kurzweil’s prediction of high-speed BCI by 2035 (only 15 years from now), how far can the technology go in this short timeframe?
Well, let’s consider that if chip performance doubles every two years, we are about to witness a 128X improvement in the technology over the next 15 years.
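That 128X figure is just compounding at work; seven doublings fit in roughly 14 to 15 years:

```python
# Compounding improvement under a two-year doubling period.
years, doubling_period = 15, 2
factor = 2 ** (years // doubling_period)   # seven whole doublings
print(factor)                              # -> 128
```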
For perspective, remember that the first-generation iPhone was only released in 2007 — just a dozen years ago — and look how far that technology has traveled!
Bolstered by converging exponential technologies, BCIs will undoubtedly experience massive transformation in the decade ahead.
But Neuralink is not alone….
While there are likely dozens of other top-secret BCI government ventures taking place in the U.S., China, and Russia, to name a few countries, here are some of the key players driving the industry in the U.S.:
(1) Kernel is currently working on a “noninvasive mind/body/machine interface (MBMI)” that will be able to receive signals from neurons in far greater numbers than the 100 neurons that current neuromodulators can stimulate.
Kernel’s CEO and my friend Bryan Johnson aims to initially use the neuroprosthetic to treat disorders such as Alzheimer’s, strokes, and concussions. Yet long-term, Johnson envisions the technology will also help humans keep up with the rapid advancement of computation.
(2) Facebook announced in 2017 its work on a noninvasive BCI that would integrate with the company’s augmented reality headset, providing a “brain click” function at the most basic level. According to Zuckerberg, the BCI can already distinguish if a user is thinking about an elephant or a giraffe, and it will ultimately be used for type-to-text communication.
“Our brains produce enough data to stream 4 HD movies every second. The problem is that the best way we have to get information out into the world—speech—can only transmit about the same amount of data as a 1980s modem. We’re working on a system that will let you type straight from your brain about 5X faster than you can type on your phone today,” as explained by Zuckerberg in a post.
(3) CTRL-Labs, a startup founded by Thomas Reardon, the creator of Microsoft Internet Explorer, and his partners, is now developing a BCI mediated by a wristband that detects voltage pulses from muscle contractions.
The group aims to eventually detect individual clusters of muscle cells so that users can link imperceptible movements to a variety of commands.
(4) One of the earliest BCI benefactors, DARPA has funded BCI research since the 1970s, aiming to use the technology in recovery and enhancement. Yet recent advancements remain under wraps.
(5) While most of the invasive BCI technologies mentioned here await human trials, BrainGate has already demonstrated success in humans. In one iteration of its technology, researchers implanted 1 – 2 electrode arrays in the brains of three paralyzed patients. The implants allowed all three to move a cursor on a screen by simply thinking about moving their hands. One participant even recorded eight words per minute.
This astounding feat, possible with just one or two small arrays, suggests tremendous promise for the thousands of electrodes that Elon plans to achieve in Neuralink’s devices. While FDA approval for human trials will likely take time (Neuralink has primarily tested its technology in mice and a few monkeys), use in human therapeutics is now finally on the horizon.
How much time?
Financial analysts forecast a $27 billion market for neural devices within the next six years. Elon anticipates reaching human trials by the end of next year. And by 2035, the technology is set to achieve low-cost, widespread adoption.
Neuralink’s high-bandwidth brain connection will exponentially transform information accessibility. Thought-to-speech technology will allow us to control avatars — both digital and robotic — directly with our minds.
We will not only upload photos and conversations to the cloud, but entire memories, ideas, and abstract thought. Say goodbye to Google search and 2D screen-confined engines as we adapt to querying directly from our brains.
And for those of you worried about Terminator-like scenarios of AI’s destruction of the human race, BCI will offer us the potential to join tomorrow’s intelligence revolution, rather than be crushed by it.
Closing Thoughts…
Every human today is composed of ~40 trillion cells that all function together in a collaborative fashion, constituting you, me, and every person alive.
One of the most profound and long-term implications of BCI is its ability to interconnect all of our minds. To share our thoughts, memories, and actions across all of humanity.
Imagine just for a moment: a future society in which each of us are connected to the cloud through high-bandwidth BCI, allowing the unfiltered sharing of feelings, memories and thoughts.
Imagine a kinder and gentler version of the Borg (from Star Trek), allowing the linking of 8 billion minds via the cloud and reaching a state of transformative human intelligence.
For those concerned about the domination of AI (i.e. the Terminator scenario), take some comfort in the notion that it isn’t AI versus humans alone. A new version of Human Augmented Intelligence (HI) is just around the corner.
Our evolution from screens to augmented reality glasses to brain-computer interfaces is already beginning. Prepare for the accelerating pace of groundbreaking HI.

Contributor: Peter Diamandis
The Future of Entertainment. I think you’ll be surprised!
Twenty years ago, entertainment was dominated by a handful of producers and monolithic broadcasters, a near-impossible market to break into. Today, the industry is almost entirely dematerialized, while storytellers and storytelling mediums explode in number. And this is just the beginning.
Netflix turned entertainment on its head practically overnight, shooting from a market cap of US$8 billion in 2010 (the same year Blockbuster filed for bankruptcy) to a record US$185.6 billion only 8 years later. This year, it is expected to spend a whopping US$15 billion on content alone.
Meanwhile, VR platforms like Google’s Daydream and Oculus have only begun bringing the action to you, while mixed reality players like Dreamscape will forever change the way we experience stories, exotic environments and even classrooms of the future.
In the words of Barry Diller, a former Fox and Paramount executive and the chairman of IAC, “Hollywood is now irrelevant.”
In this two-part series, I’ll be diving into three future trends in the entertainment industry: AI-based content curation, participatory story-building, and immersive VR/AR/MR worlds.
Today, I’ll be exploring the creative future of AI’s role in generating on-demand, customized content and collaborating with creatives, from music to film, in refining their craft.
Let’s dive in!
AI Entertainment Assistants
For many of us, film brought to life our conceptions of AI, from Marvel’s JARVIS to HAL in 2001: A Space Odyssey.
And now, over 50 years later, AI is bringing stories to life like we’ve never seen before.
Converging with the rise of virtual reality and colossal virtual worlds, AI has begun to create vastly detailed renderings of dead stars, generate complex supporting characters with intricate story arcs, and even bring your favorite stars — whether Marlon Brando or Amy Winehouse — back to the big screen and into a built environment.
While still in its nascent stages, AI has already been used to embody virtual avatars that you can converse with in VR, soon to be customized to your individual preferences.
But AI will have far more than one role in the future of entertainment as industries converge atop this fast-moving arena.
You’ve likely already seen the results of complex algorithms that predict the precise percentage likelihood you’ll enjoy a given movie or TV series on Netflix, or recommendation algorithms that queue up your next video on YouTube. Or think Spotify playlists that build out an algorithmically refined, personalized roster of your soon-to-be favorite songs.
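Many of these systems rest on collaborative filtering. Here’s a minimal sketch of the core idea — factorizing a sparse user-ratings matrix to predict unseen titles — on toy data; production recommenders at Netflix or YouTube are vastly more elaborate:

```python
import numpy as np

rng = np.random.default_rng(0)
ratings = np.array([      # rows = users, cols = titles, 0 = unseen
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

n_users, n_items = ratings.shape
k = 2                                          # latent taste dimensions
U = rng.normal(scale=0.1, size=(n_users, k))   # user taste vectors
V = rng.normal(scale=0.1, size=(n_items, k))   # title trait vectors

lr, reg = 0.01, 0.02
for _ in range(2000):
    for u, i in zip(*ratings.nonzero()):       # fit observed cells only
        err = ratings[u, i] - U[u] @ V[i]
        u_old = U[u].copy()
        U[u] += lr * (err * V[i] - reg * U[u])
        V[i] += lr * (err * u_old - reg * V[i])

pred = U @ V.T                                 # fills in the blanks
print(f"Predicted rating for user 0 on unseen title 2: {pred[0, 2]:.1f}")
```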
And AI entertainment assistants have barely gotten started.
This is already the aim of AIs like Google’s Assistant and Huawei’s Xiaoyi (a voice assistant that lives inside Huawei’s smartphones and its AI Cube smart speaker). Coming advancements will soon enable your assistant to select songs based on your current and desired mood, pick out movies that bridge your own and your friends’ viewing preferences on a group film night, or even queue up games whose characters are personalized to interact with you as you jump from level to level.
Or even imagine your own home leveraging facial technology to assess your disposition, cross-reference historical data on your entertainment choices at a given time or frame of mind, and automatically queue up a context-suiting song or situation-specific video for comic relief.
Curated Content Generators
Beyond personalized predictions, however, AIs are now taking on content generation, multiplying your music repertoire, developing entirely new plotlines, and even bringing your favorite actors back to the screen or — better yet — directly into your living room.
Take AI motion transfer, for instance.
Employing generative adversarial networks (GANs), a subset of machine learning, a team of researchers at UC Berkeley has now developed an AI motion transfer technique that superimposes the dance moves of professionals onto any amateur (‘target’) individual in seamless video.
By first mapping the target’s movements onto a stick figure, Caroline Chan and her team create a database of frames, each frame associated with a stick-figure pose. They then use this database to train a GAN and thereby generate an image of the target person based on a given stick-figure pose.
Map a series of poses from the source video to the target, frame-by-frame, and soon anyone might moonwalk like Michael Jackson, glide like Ginger Rogers or join legendary dancers on a virtual stage.
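To make that pipeline concrete, here’s a minimal sketch of the data-preparation step in Python. A real system would use an actual pose estimator (OpenPose, for example); the random keypoint generator below is purely a stand-in for illustration:

```python
import numpy as np

# Pair every video frame of the target dancer with a stick-figure pose,
# building the (pose -> frame) dataset a conditional GAN is trained on.
N_FRAMES, H, W = 100, 64, 64
N_KEYPOINTS = 18  # typical skeleton keypoint count

def extract_pose(frame):
    """Stand-in for a real pose estimator: returns (x, y) keypoints."""
    rng = np.random.default_rng(abs(hash(frame.tobytes())) % 2**32)
    return rng.uniform(0, 1, size=(N_KEYPOINTS, 2))

frames = np.random.rand(N_FRAMES, H, W, 3)            # target's video frames
poses = np.stack([extract_pose(f) for f in frames])   # matching stick figures

# Training pairs: the generator learns pose -> photorealistic frame, while
# the discriminator judges (pose, frame) pairs as real or fake.
dataset = list(zip(poses, frames))
print(f"{len(dataset)} (pose, frame) training pairs prepared")

# At inference, poses extracted from the *source* dancer's video are fed
# through the trained generator, frame by frame, to animate the target.
```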
Somewhat reminiscent of AI-generated “deepfakes,” the use of generative adversarial networks in film could massively disrupt entertainment, bringing legendary performers back to the screen and granting anyone virtual stardom.
Just as digital artists increasingly enhance computer-generated imagery (CGI) techniques with high-fidelity 3D scanning for unprecedentedly accurate rendition of everything from pores to lifelike hair textures, AI is about to give CGI a major upgrade.
Fed countless hours of footage, AI systems can be trained to refine facial movements and expressions, replicating them on any CGI model of a character, whether a newly generated face or iterations of your favorite actors.
Want Marilyn Monroe to star in a newly created Fast and Furious film? No problem! Keen to cast your brother in one of the original Star Wars movies? It might soon be as easy as contracting an AI to edit him in, ready for his next Jedi-themed birthday.
Companies like Digital Domain, co-founded by James Cameron, are hard at work to pave the way for such a future. Already, Digital Domain’s visual effects artists employ proprietary AI systems to integrate humans into CGI character design with unparalleled efficiency.
As explained by Digital Domain’s Digital Human Group director Darren Handler, “We can actually take actors’ performances — and especially facial performances — and transfer them [exactly] to digital characters.”
And this weekend, AI-CGI cooperation took center stage in Avengers: Endgame, seamlessly recreating facial expressions on its villain Thanos.
Even in the realm of video games, upscaling algorithms have been used to revive childhood classic video games, upgrading low-resolution features with striking new graphics.
One company that has begun commercializing AI upscaling techniques is Topaz Labs. While some manual craftsmanship is required, the use of GANs has dramatically sped up the process, promising extraordinary implications for gaming visuals.
But how do these GANs work? After training a GAN on millions of pairs of low-res and high-res images, one part of the algorithm attempts to build a high-resolution frame from its low-resolution counterpart, while the second algorithm component evaluates this output. And as the feedback loop of generation and evaluation drives the GAN’s improvement, the upscaling process only gets more efficient over time.
“After it’s seen these millions of photos many, many times it starts to learn what a high resolution image looks like when it sees a low resolution image,” explained Topaz Labs CTO Albert Yang.
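For the curious, here’s a minimal sketch of that feedback loop in PyTorch, applied to 4x upscaling. Random tensors stand in for the millions of real image pairs, and the networks are toy-sized; commercial models like Topaz Labs’ are proprietary and far larger:

```python
import torch
import torch.nn as nn

G = nn.Sequential(                      # generator: low-res -> high-res
    nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
D = nn.Sequential(                      # discriminator: real or upscaled?
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 3, stride=2, padding=1),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(100):
    low = torch.rand(8, 3, 16, 16)      # stand-in low-res batch
    high = torch.rand(8, 3, 64, 64)     # stand-in matching high-res batch

    # 1) Discriminator learns to tell real high-res from generated frames.
    fake = G(low).detach()
    loss_d = bce(D(high), torch.ones(8, 1)) + bce(D(fake), torch.zeros(8, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Generator improves until its output fools the discriminator.
    loss_g = bce(D(G(low)), torch.ones(8, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Each step runs the same loop the text describes: generate, evaluate, improve.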
Imagine a future in which we might transform any low-resolution film or image with remarkable detail at the click of a button.
But it isn’t just film and gaming that are getting an AI upgrade. AI songwriters are now making a major dent in the music industry, from personalized repertoires to melody creation.
AI Songwriters and Creative Collaborators
While not seeking to replace your favorite song artists, AI startups are leaping onto the music scene, raising millions in VC investments to assist musicians with creation of novel melodies and underlying beats… and perhaps one day with lyrics themselves.
Take Flow Machines, a songwriting algorithm already in commercial use.
And startups are fast following suit, including Amper, Popgun, Jukedeck and Amadeus Code.
But how do these algorithms work? By processing thousands of genre-specific songs or an artist’s genre-mixed playlist, songwriting algorithms are now capable of optimizing and outputting custom melodies and chord progressions that interpret a given style. These in turn help human artists refine tunes, derive new beats, and ramp up creative ability at scales previously unimaginable.
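As a toy illustration of the principle, the sketch below learns note-to-note transition statistics from a tiny “corpus” and samples a fresh melody in the same style; real songwriting AIs such as Flow Machines or Amper use far richer models:

```python
import random

corpus = [                              # toy melodies in C major
    ["C", "E", "G", "E", "C", "D", "E", "C"],
    ["C", "D", "E", "G", "E", "D", "C", "C"],
    ["E", "G", "A", "G", "E", "D", "C", "D"],
]

transitions = {}                        # note -> observed next notes
for melody in corpus:
    for a, b in zip(melody, melody[1:]):
        transitions.setdefault(a, []).append(b)

random.seed(42)
note, new_melody = "C", ["C"]
for _ in range(7):                      # sample a fresh 8-note melody
    note = random.choice(transitions[note])
    new_melody.append(note)

print(" ".join(new_melody))             # a new tune in the corpus's style
```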
As explained by Amadeus Code’s founder Taishi Fukuyama, “History teaches us that emerging technology in music leads to an explosion of art. For AI songwriting, I believe [it’s just] a matter of time before the right creators congregate around it to make the next cultural explosion.”
Envisioning a future wherein machines form part of the creation process, Will.i.am has even described a scenario in which he might tell his AI songwriting assistant, “Give me a shuffle pattern, and pull up a bass line, and give me a Bootsy Collins feel…”
AI: The Next Revolution in Creativity
Over the next decade, entertainment will undergo its greatest revolution yet. As AI converges with VR and crashes into democratized digital platforms, we will soon witness the rise of everything from edu-tainment, to interactive game-based storytelling, to immersive worlds, to AI characters and plot lines created on-demand, anywhere, for anyone, at almost zero cost.
We’ve already seen the dramatic dematerialization of entertainment. Streaming has taken the world by storm, as democratized platforms and new broadcasting tools birth new convergence between entertainment and countless other industries.
Posing the next major disruption, AI is skyrocketing to new heights of creative and artistic capacity, multiplying content output and allowing any artist to refine their craft, regardless of funding, agencies or record deals.
And as AI advancements pick up content generation and facilitate creative processes on the back end, virtual worlds and AR/VR hardware will transform our experience of content on the front-end.

In our next blog of the series, we’ll dive into mixed reality experiences, VR for collaborative storytelling, and AR interfaces that bring location-based entertainment to your immediate environment.
Contributor: Peter Diamandis
Networked Vehicles Will Allow for Automated Megacities

Tomorrow’s cities are reshaping almost every industry imaginable, and birthing industries we’ve never heard of.
Riding an explosion of sensors, megacity AI ‘brains,’ high-speed networks, new materials and breakthrough green solutions, cities are quickly becoming versatile organisms, sustaining and responding to the livelihood patterns of millions.
Over the next decade, cities will revolutionize everything about the way we live, travel, eat, work, learn, stay healthy, and even hydrate.
And countless urban centers, companies, and visionaries are already building out decades-long visions of the future.
Setting its sights on self-sustaining green cities, the UAE has invested record sums in its Vision 2021 plan, while sub-initiatives like Smart Dubai 2021 charge ahead with AI-geared government services, driverless car networks and desalination plants.
A trailblazer of smart governance, Estonia has leveraged blockchain, AI and ultra-high connection speeds to build a new generation of technological statecraft.
And city states like Singapore have used complex computational models to optimize everything from rainwater capture networks to urban planning, down to the routing of its ocean breeze.
While not given nearly enough credit, the personal vehicle and urban transportation stand at the core of shaping our future cities.
Yet today, your car remains an unused asset about 95 percent of the time.
In highly dense cities like Los Angeles, parking gobbles up almost 15 percent of all urban land area.
And with a whopping economic footprint, today’s global auto insurance market stands at over $200 billion.
But the personal vehicle model is on the verge of sweeping disruptions, and tomorrow’s cities will transform right along with it.
Already, driverless cars pose game-changing second-order implications for the next decade.
Take land use, for instance. By 2035, parking spaces are expected to decline by 5.7 million square meters, a boon for densely packed cities where real estate is worth its area in gold.
Beyond sheer land, a 90 percent driverless car penetration rate could result in $447 billion of projected savings and productivity gains.
But what do autonomous vehicles mean for city planning?


Let’s imagine a 100 percent autonomous vehicle (AV) penetration rate. Cars have reached Level-5 automation, are 100 percent self-driving and can now communicate seamlessly with each other.
With a packing density 8X what it is today in most cities, commutes now take a fraction of the time. Some have even predicted aggregate time savings of over 2.7 billion unproductive hours.
But time savings aside, cars can now be entirely reimagined, serving a dual purpose for sleep, office work, morning calls, time with your kids, you name it.
With plummeting commute times and functional vehicles (think: a mobile office, bed, or social space), cities need no longer be geographically concentrated, allowing you to live well outside the bounds of a business district.
And as AVs give rise to an on-demand, Cars-as-a-Service (CaaS) business model, urban sprawl will enable the flourishing of megacities on an unprecedented scale.
While architects and civil engineers leap to the scene, others are already building out smart network precursors for a future of decentralized vehicles.
Using Narrowband-IoT (NB-IoT) for low power consumption, Huawei has recently launched a smart parking network in Shanghai that finds nearby parking spots for users on the go, allowing passengers to book and pay via smartphone in record time.
In the near future, however, vehicles — not drivers — will book vertically stacked parking spots and charge CaaS suppliers on their own (for storage).
This is where 5G networks come in, driving down latencies between driverless cars, as well as between AVs and their CaaS providers. Using sensor suites and advanced AI, vehicles will make smart transactions in real-time, charging consumers by the minute or mile, notifying manufacturers of wear-and-tear or suboptimal conditions, and even billing for insurance dollars in the now highly unlikely case of a fender-bender.
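As an illustration of the billing model this implies, here’s a toy sketch; the rates and fields are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Trip:
    minutes: float
    miles: float

PER_MINUTE, PER_MILE = 0.15, 0.40   # hypothetical CaaS rates (USD)

def bill(trip: Trip) -> float:
    """Usage-based charge a vehicle could transact on its own."""
    return round(trip.minutes * PER_MINUTE + trip.miles * PER_MILE, 2)

print(bill(Trip(minutes=22, miles=8.5)))   # -> 6.7
```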

With an eye to the future, cellular equipment manufacturers are building out the critical infrastructure for these and similar capabilities. Huawei, for one, is embedding chipsets under parking spaces across Shanghai, each collating and transmitting real-time data on occupancy rates, as the company ramps up its 5G networks.
And Huawei is not alone.
Building out a similar solution is China Unicom, whose smart city projects span the gamut from smart rivers that communicate details of environmental pollution, to IoT and AI-geared drones in agriculture.
Already, China Unicom has established critical communications infrastructure with an NB-IoT network that spans over 300 Chinese cities, additionally deploying eMTC, a lower power wide area technology that leverages existing LTE base stations for IoT support.
Beyond its mobile carriers, however, China has brought together four key private sector players to drive the world’s largest coordinated smart city initiative yet. Announced just last August at China’s Smart City International Expo, the official partnership knights a true power team, composed of Ping An, Alibaba, Tencent, and Huawei (PATH).
With 500 cities under their purview, these tech giants are each tackling a piece of the puzzle.
On the heels of over ten years of research and 50 billion RMB (over US$7.4 billion), Chinese insurance giant Ping An released a white paper addressing smart city strategies across blockchain, biometrics, AI and cloud computing.
Meanwhile, Alibaba plans to embed seamless mobile payments (through AliPay) into the fabric of daily life, as Tencent takes charge of communications and Huawei works on hardware and 5G buildout (not to mention its signature smartphones).
But it isn’t just driverless vehicles that are changing the game for smart cities.
One of the most advanced city states on the planet, Singapore joins Dubai in envisioning a future of flying vehicles and optimized airway traffic flow.
As imagined by award-winning architect of Singapore’s first zero-carbon house, Jason Pomeroy, Singapore could in the not-too-distant future explore everything from air rights to flying car structures built above motorways and skyscrapers.
“Fast-forward 50 years from now. You already see drone technology getting so advanced, [so] why are we not sticking people into those drones. All of a sudden, your sky courts, your sky gardens, even your private terraces to your condo [become] landing platform[s] for your own personalized drone.”

Already, Singapore’s government is bolstering advanced programs to test drone capacity limits, with automated routing and private sector innovation. Most notably, Airbus’ ‘Skyways’ venture has begun building out its vision for urban air mobility in Singapore, where much of the company’s testing has taken place.
Yet, as megacities attract millions of new residents from across the planet, building out smart networks for autonomous and flying vehicles, one of our greatest priorities becomes smart city governance.
Smart Public Services & Optimized Urban Planning
With the rise of urbanization, I’m led to the conclusion that megacities will become the primary nodes of data acquisition, data integration and thereby the primary mechanism of governance.
In just over 10 years, the UN forecasts that around 43 cities will house over 10 million residents each. Autonomous and flying cars, delocalized work and education, and growing urban populations are all beginning to transform cities into interconnected, automated ecosystems, sprawled over vast swaths of geography.
Now more than ever, smart public services and automated security will be needed to serve as the glue that holds these megacities together. Public sector infrastructure and services will soon be hosted on servers, detached from land and physical form. And municipal governments will face the scale of city states, propelled by an upwards trend in sovereign urban hubs that run almost entirely on their own.
Take e-Estonia.
Perhaps the least expected on a list of innovative nations, this former Soviet Republic-turned digital society is ushering in an age of technological statecraft.
Hosting every digitizable government function on the cloud, Estonia could run its government almost entirely on a server.
Starting in the 1990s, Estonia’s government has covered the nation with ultra-high-speed data connectivity, laying down tremendous amounts of fiber-optic cable. By 2007, citizens could vote from their living rooms.
With digitized law, Estonia signs policies into effect using cryptographically secure digital signatures, and every stage of the legislative process is available to citizens online, including plans for civil engineering projects.
But it doesn’t stop there.
Citizens’ healthcare registry is run on the blockchain, allowing patients to own and access their own health data from anywhere in the world — X-rays, digital prescriptions, medical case notes — all the while tracking who has access.
And i-Voting, civil courts, land registries, banking, taxes, and countless e-facilities allow citizens to access almost any government service with an electronic ID and personal PIN online.
But perhaps Estonia’s most revolutionary breakthrough is its recently introduced e-citizenship.
With over 50,000 e-residents from across 157 countries, Estonia issues electronic IDs to remote ‘inhabitants’ anywhere in the world, changing the nature of city borders themselves. While e-residency doesn’t grant territorial rights, over 6,000 e-residents have already established companies within Estonia’s jurisdiction.
From start to finish, the process takes roughly three hours, and 98 percent of businesses are established entirely online, offering data security, offshore benefits, and some of the most efficient taxes on the planet.
After companies are registered online, taxes are near-entirely automated — calculated in minutes and transmitted to the Estonian government with unprecedented ease.
The implications of e-residency and digital governance are huge. As with any software, open-source code for digital governance could be copied perfectly at almost zero cost, lowering the barrier to entry for any megacity or village alike seeking its own urban e-services.
As Peter Diamandis’s good friend David Li often notes, thriving village startup ecosystems and e-commerce hotbeds have taken off throughout China’s countryside, resulting in the mass movement and meteoric rise of ‘Taobao Villages.’
As smart city governance becomes democratized, what’s to stop these or any other town from building out or even duplicating e-services?
But Estonia is not the only one pioneering rapid-fire government uses of blockchain technology.
Within the next year, Dubai aims to become the first city powered entirely by the Blockchain, a long-standing goal of H.H. Sheikh Mohammed bin Rashid Al Maktoum.
Posing massive savings, government adoption of blockchain not only stands to save Dubai over 5.5 billion dirham (nearly US$1.5 billion), but will also roll out everything from emCash, a citywide cryptocurrency, to a blockchain-based vehicle monitoring system announced by the RTA.
Possibly a major future smart city staple, systems similar to this latter blockchain-based network could one day underpin AVs, flying taxis and on-demand Fly-as-a-Service personal drones.
With a similar mind to Dubai, multiple Chinese smart city pilots are quickly following suit.
Almost two years ago, China’s central government and President Xi Jinping designated a new megalopolis spanning three counties and rivaling almost every other Chinese special economic zone: Xiong’an New Area.
Deemed a “crucial [strategy] for the millennium to come,” Xiong’an is slated to bring in over 2.4 trillion RMB (a little over US$357 billion) in investment over the next decade, redirecting up to 6.7 million people and concentrating supercharged private sector innovation.
And forging a new partnership, Xiong’an plans to work in direct consultation with ConsenSys on ethereum-based platforms for infrastructure and any number of smart city use cases. Beyond blockchain, Xiong’an will rely heavily on AI and has even posited plans for citywide cognitive computing.
But any discussion of smart government services would be remiss without mention of Singapore.
One of the most resourceful, visionary megacities on the planet, Singapore has embedded advanced computational models and high-tech solutions in everything from urban planning to construction of its housing units.
Responsible for creating living spaces for nearly 80 percent of its residents (through government-provided housing), the nation’s Housing & Development Board (HDB) stands as an exemplar of disruptive government.
Singapore uses sophisticated computer models, enabling architects across the board to build environmentally optimized living and city spaces. Take Singapore’s simulated ocean breeze for optimized urban construction patterns.
As explained by HDB’s CEO Dr. Cheong Koon Hean, “Singapore is in the tropics, so we want to encourage the breezes to come through. Through computer simulation, you can actually position the blocks[,] public spaces [and] parks in such a way that help[s] you achieve this.”

And beyond its buildings, Singapore uses intricate, precision-layered infrastructure for essential services, down to water and electrical tunnels, commercial spaces underground, and complex transportation networks all beneath the city surface.
Even in the realm of feeding its citizens, Singapore is fast becoming a champion of vertical farming. It opened the world’s first commercial vertical farm over 6 years ago, aiming to feed the entire island nation with a fraction of the land use.
Whether giving citizens a vote on urban planning with the click of a button, or optimizing environmental conditions through public housing and commercial skyscrapers, smart city governance is a key pillar of the future.
Visions of the Future
Bringing together mega-economies, green city infrastructure and e-services that decimate inefficiency, future transportation and web-based urban services will shape how and where we live, on unthinkable dimensions.
Networked drones, whether personal or parcel deliveries, will circle layered airways, all operated using AI city brains and blockchain-based data infrastructures. Far below, driverless vehicles will give rise to on-demand Cars-as-a-Service, sprawling cities and newly unlocked real estate. And as growing megacities across the world begin grappling with next-gen technologies, who knows how many whimsical city visions and architectural plans will populate the Earth — and one day, even space.
Contributor: Peter Diamandis
AI Augments Healthcare and Longevity

When it comes to the future of healthcare, perhaps the only technology more powerful than CRISPR is Artificial Intelligence.
Over the past five years, healthcare AI startups around the globe raised over $4.3 billion across 576 deals, topping all other industries in AI deal activity.
During this same period, the FDA has given 70 AI healthcare tools and devices ‘fast-tracked approval’ because of their ability to save both lives and money.
The pace of AI-augmented healthcare innovation is only accelerating.
In Part 3 of this blog series on Longevity & Vitality, I cover the different ways in which AI is augmenting our healthcare system, enabling us to live longer and healthier lives.
In this blog, I’ll expand on:
- Machine learning and drug design
- Artificial Intelligence and Big Data in medicine
- Healthcare, AI & China
Let’s dive in.
Machine Learning in Drug Design
What if AI systems, specifically neural networks, could predict the design of novel molecules (i.e. medicines) capable of targeting and curing any disease?
Imagine leveraging cutting-edge artificial intelligence to accomplish with 50 people what the pharmaceutical industry can barely do with an army of 5,000.
And what if these molecules, accurately engineered by AIs, always worked? Such a feat would revolutionize our $1.3 trillion global pharmaceutical industry, which currently holds a dismal record of 1 in 10 target drugs ever reaching human trials.
It’s no wonder that drug development is massively expensive and slow. It takes over 10 years to bring a new drug to market, with costs ranging from $2.5 billion to $12 billion.
This inefficient, slow-to-innovate, and risk-averse industry is a sitting duck for disruption in the years ahead.
One of the hottest startups in digital drug discovery today is Insilico Medicine.
Leveraging AI in its end-to-end drug discovery pipeline, Insilico Medicine aims to extend healthy longevity through drug discovery and aging research.
Their comprehensive drug discovery engine uses millions of samples and multiple data types to discover signatures of disease, identify the most promising protein targets, and generate perfect molecules for these targets.
These molecules either already exist or can be generated de novo with the desired set of parameters.
In late 2018, Insilico’s CEO Dr. Alex Zhavoronkov announced the groundbreaking result of generating novel molecules for a challenging protein target with an unprecedented hit rate in under 46 days. This included both synthesis of the molecules and experimental validation in a biological test system — an impressive feat made possible by converging exponential technologies.
Underpinning Insilico’s drug discovery pipeline is a novel machine learning technique called Generative Adversarial Networks (GANs), used in combination with deep reinforcement learning.
Generating novel molecular structures for diseases both with and without known targets, Insilico is now pursuing drug discovery in aging, cancer, fibrosis, Parkinson’s disease, Alzheimer’s disease, ALS, diabetes, and many others. Once rolled out, the implications will be profound.
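As a heavily simplified illustration of the generative step, the sketch below learns character-level transition statistics over a handful of SMILES strings and samples new candidate strings. Insilico’s actual engine couples GANs with deep reinforcement learning, and real pipelines validate chemistry with tools like RDKit; strings sampled here are not guaranteed chemically valid:

```python
import random

smiles_corpus = [         # tiny illustrative corpus of real molecules
    "CCO",                # ethanol
    "CC(=O)O",            # acetic acid
    "c1ccccc1",           # benzene
    "CC(C)CC(=O)O",       # a simple branched carboxylic acid
]

table = {}
for s in smiles_corpus:
    s = "^" + s + "$"                 # start / stop markers
    for a, b in zip(s, s[1:]):
        table.setdefault(a, []).append(b)

random.seed(7)
for _ in range(3):                    # sample three candidate strings
    ch, out = "^", []
    while True:
        ch = random.choice(table[ch])
        if ch == "$" or len(out) > 20:
            break
        out.append(ch)
    print("".join(out))               # novel (unvalidated) candidates
```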
Dr. Zhavoronkov’s ultimate goal is to develop a fully automated Health-as-a-Service (HaaS) and Longevity-as-a-Service (LaaS) engine.
Once plugged into the services of companies from Alibaba to Alphabet, such an engine would enable personalized solutions for online users, helping them prevent diseases and maintain optimal health.
Insilico, alongside other companies tackling AI-powered drug discovery, truly represents the application of the 6 D’s. What was once a prohibitively expensive and human-intensive process is now rapidly becoming digitized, dematerialized, demonetized and, perhaps most importantly, democratized.
Companies like Insilico can now do with a fraction of the cost and personnel what the pharmaceutical industry can barely accomplish with thousands of employees and a hefty bill to foot.
As discussed in Peter Diamandis’s blog on ‘The Next Hundred-Billion-Dollar Opportunity,’ Google’s DeepMind has now turned its neural networks to healthcare, entering the digitized drug discovery arena.
In 2017, DeepMind achieved a phenomenal feat by matching the fidelity of medical experts in correctly diagnosing over 50 eye disorders.
And just a year later, DeepMind announced a new deep learning tool called AlphaFold. By predicting the elusive ways in which various proteins fold on the basis of their amino acid sequences, AlphaFold may soon have a tremendous impact in aiding drug discovery and fighting some of today’s most intractable diseases.
Artificial Intelligence and Data Crunching
AI is especially powerful in analyzing massive quantities of data to uncover patterns and insights that can save lives.
Take WAVE, for instance.
Every year, over 400,000 patients die prematurely in U.S. hospitals as a result of heart attack or respiratory failure.
Yet these patients don’t die without leaving plenty of clues. Given information overload, however, human physicians and nurses alone have no way of processing and analyzing all necessary data in time to save these patients’ lives.
Enter WAVE, an algorithm that can process enough data to offer a six-hour early warning of patient deterioration.
Just last year, the FDA approved WAVE as an AI-based predictive patient surveillance system to predict and thereby prevent sudden death.
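To make the concept concrete, here’s a minimal sketch of an early-warning model in this spirit, trained on synthetic vital signs; the real, FDA-approved system is of course built on vast clinical datasets:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
# features: heart rate, respiratory rate, systolic BP, SpO2
X = np.column_stack([
    rng.normal(80, 15, n), rng.normal(16, 4, n),
    rng.normal(120, 20, n), rng.normal(97, 2, n),
])
# synthetic label: deterioration more likely with high HR/RR, low BP/SpO2
risk = 0.04*X[:, 0] + 0.2*X[:, 1] - 0.03*X[:, 2] - 0.5*X[:, 3]
y = (risk + rng.normal(0, 1, n) > np.median(risk)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

patient = np.array([[118, 26, 92, 90]])   # tachycardic, hypoxic patient
print(f"Deterioration risk: {model.predict_proba(patient)[0, 1]:.0%}")
```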
Another highly valuable yet difficult-to-parse mountain of medical data comprises the 2.5 million medical papers published each year.
For some time, it has become physically impossible for a human physician to read — let alone remember — all of the relevant published data.
To counter this compounding conundrum, Johnson & Johnson is teaching IBM Watson to read and understand scientific papers that detail clinical trial outcomes.
Enriching Watson’s data sources, Apple is also partnering with IBM to provide access to health data from mobile apps.
One such Watson system contains 40 million documents, ingesting an average of 27,000 new documents per day, and providing insights for thousands of users.
After only one year, Watson’s successful diagnosis rate of lung cancer has reached 90 percent, compared to the 50 percent success rate of human doctors.
But what about the vast amount of unstructured medical patient data that populates today’s ancient medical system? This includes medical notes, prescriptions, audio interview transcripts, pathology and radiology reports.
In late 2018, Amazon announced a new HIPAA-eligible machine learning service that digests and parses unstructured data into categories, such as patient diagnosis, treatments, dosages, symptoms and signs.
Taha Kass-Hout, Amazon’s senior leader in health care and artificial intelligence, told the WSJ that internal tests demonstrated that the software even performs as well as or better than other published efforts.
On the heels of this announcement, Amazon confirmed it was teaming up with the Fred Hutchinson Cancer Research Center to evaluate “millions of clinical notes to extract and index medical conditions.”
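Assuming the service in question is AWS Comprehend Medical, a likely shape of the workflow looks like this (AWS credentials and service access required; the printed output is illustrative):

```python
import boto3

client = boto3.client("comprehendmedical", region_name="us-east-1")

note = ("Pt presents with chest pain radiating to left arm. "
        "Started metoprolol 25 mg twice daily. Hx of hypertension.")

# detect_entities_v2 parses free text into categories such as
# MEDICATION and MEDICAL_CONDITION, with dosage/frequency attributes.
response = client.detect_entities_v2(Text=note)

for entity in response["Entities"]:
    print(f"{entity['Category']:>20}: {entity['Text']}")
# e.g.    MEDICAL_CONDITION: chest pain
#                MEDICATION: metoprolol
```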
Having already driven extraordinary algorithmic success rates in other fields, data is the healthcare industry’s goldmine for future innovation.
Healthcare, AI & China
In 2017, the Chinese government published its ambitious national plan to become a global leader in AI research by 2030, with healthcare listed as one of four core research areas during the first wave of the plan.
Just a year earlier, China began centralizing healthcare data, tackling a major roadblock to developing longevity and healthcare technologies (particularly AI systems): scattered, dispersed, and unlabeled patient data.
Backed by the Chinese government, China’s largest tech companies — particularly Tencent — have now made strong entrances into healthcare.
Just recently, Tencent participated in a $154 million megaround for China-based healthcare AI unicorn iCarbonX.
Hoping to develop a complete digital representation of your biological self, iCarbonX has acquired numerous U.S. personalized medicine startups.
Considering Tencent’s own Miying healthcare AI platform — aimed at assisting healthcare institutions in AI-driven cancer diagnostics — Tencent is quickly expanding into the drug discovery space, participating in two multimillion-dollar, U.S.-based AI drug discovery deals just this year.
China’s biggest second-order move into the healthtech space comes through Tencent’s WeChat. In the span of a mere few years, 60 percent of the 38,000 medical institutions registered on WeChat have come to allow patients to digitally book appointments through Tencent’s mobile platform.
At the same time, 2,000 Chinese hospitals accept WeChat payments.
Tencent has additionally partnered with the U.K.’s Babylon Health, a virtual healthcare assistant startup whose app now allows Chinese WeChat users to message their symptoms and receive immediate medical feedback.
Similarly, Alibaba’s healthtech focus started in 2016 when it released its cloud-based AI medical platform, ET Medical Brain, to augment healthcare processes through everything from diagnostics to intelligent scheduling.
Conclusion
As Nvidia CEO Jensen Huang has stated, “Software ate the world, but AI is going to eat software.” Extrapolating this statement to a more immediate implication, AI will first eat healthcare, resulting in dramatic acceleration of longevity research and an amplification of the human healthspan.
Next week, I’ll continue to explore this concept of AI systems in healthcare.
Particularly, I’ll expand on how we’re acquiring and using the data for these doctor-augmenting AI systems: from ubiquitous biosensors, to the mobile healthcare revolution, and finally, to the transformative power of the health nucleus.
As AI and other exponential technologies increase our healthspan by 30 to 40 years, how will you leverage these same exponential technologies to take on your Moonshots and live out your Massively Transformative Purpose?

Contributor: Peter Diamandis
How to Combine Easy-to-Create Crowd Solutions with AI to Achieve Superhuman Performance Levels
This article provides live examples of AI and the Crowd in effective collaboration. We start with the field of biology: the machinery of biology is built from proteins, and a protein’s shape defines its function.
One of the most challenging (and consequential) problems in modern medicine revolves around predicting the structure of proteins based on their amino acid sequences.
The human body can make vast numbers of different proteins, with estimates ranging in the tens of thousands. How a protein folds into a 3D structure depends on the number and types of amino acids it contains.
Normally, proteins take on whatever shape is most energy efficient, but they can become tangled and misfolded, leading to disorders such as Parkinson’s and Alzheimer’s disease.
A protein can twist and bend between each amino acid, so that a protein with hundreds of amino acids has the potential to take on a staggering number of different structures: 1 followed by 300 zeroes (i.e. more possible ways to fold than there are atoms in the universe).
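The combinatorics behind that number are easy to reproduce. Assume, illustratively, that each of 300 residues can adopt about ten conformations:

```python
# Exponential blow-up of the protein conformational space.
residues = 300
conformations_per_residue = 10      # illustrative assumption

total = conformations_per_residue ** residues
print(f"~10^{len(str(total)) - 1} possible structures")   # ~10^300
```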
Understanding the mechanics of protein folding would be a boon to medicine and drug discovery. Yet up until a decade ago, figuring out which of the many folding configurations a protein would take was a problem relegated (in vain) to the best of supercomputers….
Until one team of researchers at the University of Washington launched an online game called FoldIt.
FoldIt: The Crowd Solution
Aiming to crowdsource this seemingly impossible ‘puzzle,’ FoldIt gives players a sequence of amino acids, which users can then experiment with and fold into any number of structures.
More than just a number-crunching task, finding the right folding combo is in part a game of intuition. And in less than a few weeks, FoldIt proved that the human brain’s pattern recognition capacity in aggregate could outperform even the most complex of computer programs.
Shortly after FoldIt’s release in 2008, tens of thousands of online gamers signed up to compete — young, old, individual puzzle players, and even competitive teams of aspiring scientists.
Their goal? Click, drag and pull a given protein’s chains into configurations that minimize energy, just as molecules self-assemble in real life.
Seeing the bigger picture, leveraging spatial reasoning, and following gut instincts on the basis of ‘what doesn’t look right,’ the crowd consistently outperformed its software counterpart, yielding tremendous contributions to Alzheimer’s and cancer research.
And in 2011, the FoldIt community landed a key victory in the fight against HIV/AIDS.
Over the course of the preceding decade, researchers had struggled with countless methods to determine the structure of a retroviral protease of the Mason-Pfizer monkey virus, a critical enzyme in the replication of HIV. Yet failure after failure left top scientists baffled, unable to solve the protein’s crystal structure.
In a last-ditch attempt to leverage the crowd, one Polish scientist turned to FoldIt’s army of puzzle-solvers….
Within just ten days, one team of online gamers scattered across three continents solved the viral protein’s structure, crowning ten years of hard work with a ten-day victory.
And until very recently, this was the best possible option for predicting protein folding…
Enter AlphaFold
Having reached superhuman performance levels in the games of chess and Go, Google’s DeepMind recently turned its neural networks to healthcare.
In 2018, DeepMind announced a new deep learning tool called AlphaFold for predicting protein folding to aid drug discovery.
Progress in the field is benchmarked by a biennial protein-folding competition, the Critical Assessment of Structure Prediction (CASP).
The rules are simple. Teams are all given an amino acid sequence, and the team that submits the most accurate structure prediction wins.
On its first foray into the competition, AlphaFold won hands-down against a field of 98 entrants, predicting the most accurate structure for 25 out of 43 proteins, compared to the second-place team, which was only able to predict 3 out of 43 proteins.
How fast does AlphaFold work? The program initially took a couple of weeks to predict a protein structure, but now creates predicted models in a couple of hours.
AlphaFold is only the beginning of AI’s quest to make a real impact in the realm of disease treatment and medical breakthroughs.
As Peter Diamandis has predicted, DeepMind’s victory represents the second step in an evolution from the crowd to pure AI, whereby AI is now beginning to take over highly complex tasks from the interim step of the Crowd.
In the meantime…. what if we could combine the collective intelligence of the crowd with the computational power of machines?
AI and the Crowd: A Collaboration
Today, we occupy a rare moment in history where AI can facilitate and even enhance the genius of collective human intelligence, or what we might call the ‘hive mind.’
Back in 2016, Eric Schmidt suggested that the next Google will be a crowdsourcing AI company:
“[That] model, [in which] you crowdsource information in, you learn it, and then you sell it, is in my view a highly-likely candidate for the next $100 billion corporations.”
“If I was starting a company, I’d start with that premise today. How can I use this concept of scalability and get my users to teach me? If my users teach me and I can sell to them and others a service that is better than their knowledge, it’s a win for everybody.”
Complementary forces, AI and crowdsourced wisdom offer radically different benefits.
In the case of collective intelligence, humans have the major advantage of intuition.
Instead of number-crunching our way through any problem, we know when to crunch numbers or which tools to use, and can redefine complex puzzles from creative new vantage points.
Unlike pretty much all AI systems, we can also often explain the reasoning behind our decisions and choices, the logic driving our conclusions.
And when combined, the aggregate predictions of crowds tend to be extraordinarily accurate, far exceeding individual estimates.
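A few lines of simulation show why: averaging many noisy, unbiased estimates cancels out individual error.

```python
import numpy as np

rng = np.random.default_rng(1)
truth = 1000                                  # e.g. jellybeans in a jar
guesses = rng.normal(truth, 300, size=5000)   # individual error ~30%

individual_error = np.mean(np.abs(guesses - truth))
crowd_error = abs(guesses.mean() - truth)

print(f"Typical individual off by ~{individual_error:.0f}")  # ~240
print(f"Crowd average off by only ~{crowd_error:.0f}")       # single digits
```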
But even aggregating the expertise and ideas of thousands of minds has its limits and inaccuracies.
Enter AI-aided swarm intelligence.
Already, MIT’s Center for Collective Intelligence is working to combine the best of collective genius with machine systems that optimize our productivity, hive mind solutions, company profits and even the methods we use to think about difficult issues.
If two brains are better than one, how could we take advantage of a hundred, a thousand, or even 8 billion?
And MIT isn’t alone.
Now, a company called Unanimous A.I. has developed swarm AI-based software solutions that connect people and their collective expertise, settling on crowdsourced answers in real-time.
With Swarm AI, Unanimous’ crowdsourced predictions — from sports wagers to Oscar betting — now outperform both top expert forecasts and pure AI-generated projections.
Given any question, Swarm AI projects several possible answers on a screen, measuring the confidence with which each player pulls a virtual bubble toward their preferred answer.
Aggregating “collective confidence” of the group, Unanimous’ algorithm then settles on an answer, outperforming all traditional voting systems.
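In spirit (Unanimous’ actual algorithm is proprietary and operates continuously in real time, not in one shot), the aggregation resembles confidence-weighted voting:

```python
# Toy sketch of confidence-weighted aggregation across a "swarm."
votes = [
    ("Team A", 0.9), ("Team B", 0.4), ("Team A", 0.6),
    ("Team B", 0.95), ("Team B", 0.7), ("Team A", 0.3),
]

scores = {}
for answer, confidence in votes:        # each pull weighted by conviction
    scores[answer] = scores.get(answer, 0.0) + confidence

winner = max(scores, key=scores.get)
print(scores)                           # {'Team A': 1.8, 'Team B': 2.05}
print(f"Swarm converges on: {winner}")
```

Note how a simple majority vote (3 vs. 3 here) would deadlock, while weighting by confidence produces a decision.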
Imagine the implications: crowdsourced medical diagnoses, financial predictions, even tech-aided democracies and moral value judgments…
Already, Swarm AI-moderated group predictions have offered a tremendous upgrade to radiological evaluations. Moderating the assessments of eight leading Stanford radiologists regarding whether 50 chest X-rays showed signs of pneumonia, Unanimous’ software yielded a group prediction 33 percent more accurate than any individual evaluation.
And some have even posited the use of ASI (Artificial Swarm Intelligence) in determining ethical judgment calls.
Final Thoughts
The use of crowdsourcing to train AI systems is one of the most overlooked, deceptively growing and MONUMENTAL industries of the next decade….
If artificial intelligence is the electricity of the 21st century, collective intelligence will soon be its most valuable fuel.
And as we continue to approach the merging of mind and machine at an ever-accelerating pace, just imagine the unprecedented new solutions we can create, together.

Contributor: Peter Diamandis
Machines Will Do More Work Than Humans By 2025, Says The WEF
The World Economic Forum has just released its latest AI job forecast, projecting changes to the job market on a historic scale. While machines currently constitute roughly 29 percent of total hours worked in major industries — a fraction of the 71 percent accounted for by people — the WEF predicts that in just 4 years, this ratio will begin to equalize (with 42 percent total hours accounted for by AI-geared robotics). But perhaps the report’s most staggering projection is that machine learning and digital automation will eliminate 75 million jobs by 2025. However, as new industries emerge and technological access allows people to adopt never-before-heard-of professions, the WEF offers a hopeful alternative, predicting the creation of nearly 133 million new roles aided by the very technologies currently displacing many in our workforce.
Why it’s important: Already, more than 57 million workers — nearly 36 percent of the U.S. workforce — freelance. And based on today’s workforce growth rates as assessed by 2017’s Freelancing in America report, the majority of America’s workforce will freelance by 2027. Advancements in connectivity, AI and data proliferation will free traditional professionals to provide the services we do best. Doctors supplemented by AI-driven diagnostics may take more advisory roles, teachers geared with personalized learning platforms will soon be freed to serve as mentors, and barriers to entry for entrepreneurs — regardless of socioeconomic background — will dramatically decline. http://bit.ly/2xCrKCD
Contributor: Peter Diamandis