Predictive Mapping with Artificial Intelligence: A Powerful Combination

Between 2005 and 2014, natural disasters claimed the lives of over 700,000 people and caused more than US$1.4 trillion in damage.
During the past 50 years, the frequency of recorded natural disasters has surged nearly five-fold.
And as wildfires grow increasingly untamable, wreaking havoc across regions like the Amazon and California, the need for rapid response and smart prevention is higher than ever.
In this blog, I’ll be exploring how converging exponential technologies (AI, robotics, drones, sensors, networks) are transforming the future of disaster relief — how we can prevent catastrophe in the first place and get help to victims during that first golden hour, when immediate relief can save lives.
Here are the three areas of greatest impact:
- AI, predictive mapping, and the power of the crowd
- Next-gen robotics and swarm solutions
- Aerial drones and immediate aid supply
Let’s dive in!
When it comes to immediate and high-precision emergency response, data is gold.
Already, the meteoric rise of space-based networks, stratosphere-hovering balloons, and 5G telecommunications infrastructure is in the process of connecting every last individual on the planet.
Aside from democratizing the world’s information, however, this upsurge in connectivity will soon grant anyone, particularly those most vulnerable to natural disasters, the ability to broadcast detailed geotagged data.
Armed with the power of data broadcasting and the force of the crowd, disaster victims now play a vital role in emergency response, turning a historically one-way blind rescue operation into a two-way dialogue between connected crowds and smart response systems.
With a skyrocketing abundance of data, however, comes a new paradigm: one in which we no longer face a scarcity of answers. Instead, it will be the quality of our questions that matters most.
This is where AI comes in: our mining mechanism.
In the case of emergency response, what if we could strategically map an almost endless amount of incoming data points? Or predict the dynamics of a flood and identify a tsunami’s most vulnerable targets before it even strikes? Or even amplify critical signals to trigger automatic aid by surveillance drones and immediately alert crowdsourced volunteers?
Already, a number of key players are leveraging AI, crowdsourced intelligence, and cutting-edge visualizations to optimize crisis response and multiply relief speeds.
Take One Concern, for instance.
Born out of Stanford under the mentorship of leading AI expert Andrew Ng, One Concern leverages AI through analytical disaster assessment and calculated damage estimates.
Partnering with the City of Los Angeles, San Francisco, and numerous cities in San Mateo County, the platform assigns verified, unique ‘digital fingerprints’ to every element in a city. Building robust models of each system, One Concern’s AI platform can then monitor site-specific impacts of not only climate change but each individual natural disaster, from sweeping thermal shifts to seismic movement.
This data, combined with records of city infrastructure and past disasters, is then used to predict future damage under a range of disaster scenarios, informing prevention methods and identifying structures in need of reinforcement.
Within just four years, One Concern can now make predictions with an 85 percent accuracy rate in under 15 minutes.
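To make the idea of per-structure damage prediction concrete, here is a deliberately tiny sketch in the spirit of the approach: each building's "digital fingerprint" (its attributes) feeds a model that scores expected damage for a given hazard scenario. The features, weights, and scenario fields below are invented for illustration and are not One Concern's actual model.

```python
# Toy per-building earthquake damage scoring. All weights are illustrative.
def damage_score(building, quake_magnitude):
    """Estimate relative damage (0-1) for one building in an earthquake."""
    # Older, soft-soil buildings fare worse; seismic retrofits help.
    score = 0.0
    score += 0.04 * max(0, 2024 - building["year_built"] - 30) / 10  # age penalty
    score += 0.10 if building["soil"] == "soft" else 0.0
    score -= 0.15 if building["retrofitted"] else 0.0
    score += 0.08 * (quake_magnitude - 5.0)  # hazard intensity
    return min(1.0, max(0.0, score))

city = [
    {"id": "A", "year_built": 1925, "soil": "soft", "retrofitted": False},
    {"id": "B", "year_built": 2010, "soil": "rock", "retrofitted": True},
]

# Rank structures most in need of reinforcement under a M6.5 scenario.
ranked = sorted(city, key=lambda b: damage_score(b, 6.5), reverse=True)
print([b["id"] for b in ranked])  # → ['A', 'B']
```

The point of the sketch is the workflow, not the numbers: enumerate every structure, score each one against a scenario, and triage reinforcement efforts by predicted damage.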
And as IoT-connected devices and intelligent hardware continue to boom, a blooming trillion-sensor economy will only serve to amplify AI’s predictive capacity, offering us immediate, preventive strategies long before disaster strikes.
Take forest fires, for instance.
University of Utah atmospheric scientist Adam Kochanski and a team of researchers are now refining a computer model with new data to predict how fires will spread and what weather events will follow in their wake.
Initiating a “prescribed fire” — a controlled fire typically intended for habitat restoration in forest regions — the team used numerous infrared camera-fitted drones, laser scanning, and sensors to collect data while Kochanski tested his predictive model’s forecasts.
While the generated data is still being processed, the experiment is contributing to ‘coupled fire-atmosphere models,’ which capture how wildfires and local weather conditions influence each other. Already, Kochanski’s model has proved remarkably predictive of the experimental fire’s actual behavior.
Paired with robust networks of sensors and autonomous drone fleets, computer models that incorporate weather conditions in AI forest fire mapping could help us to stem early fires before they gain momentum, saving forests, lives, and entire habitats.
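To give a feel for what "fire spread modeling" means computationally, here is a deliberately tiny cellular-automaton sketch with a wind bias. It illustrates only the general idea; real coupled fire-atmosphere models like Kochanski's solve atmospheric physics, not grid-cell rules, and every probability below is an invented placeholder.

```python
# Toy wildfire spread: 'F' = burning, 'T' = fuel, '.' = burned/empty.
# Fire spreads to 4-neighbors each step, preferentially downwind.
import random

def step(grid, wind=(0, 1), p_spread=0.3, p_downwind=0.9, rng=random.random):
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != "F":
                continue
            new[r][c] = "."  # this cell burns out
            for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == "T":
                    p = p_downwind if (dr, dc) == wind else p_spread
                    if rng() < p:
                        new[nr][nc] = "F"
    return new

grid = [list("TTTT"), list("TFTT"), list("TTTT")]
grid = step(grid, wind=(0, 1), rng=lambda: 0.0)  # deterministic: always spreads
print(["".join(row) for row in grid])  # → ['TFTT', 'F.FT', 'TFTT']
```

Feeding such a model live sensor and drone data, as the researchers above do with far richer physics, is what turns a simulation into a forecast responders can act on.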
As mobile connectivity and abundant sensors converge with AI-mined crowd intelligence, real-time awareness will only multiply in speed and scale.
Imagining the Future….
Within the next 10 years, spatial web technology might even allow us to tap into mesh networks.
In short, this means that individual mobile users can together establish a local mesh network using nothing but the compute power in their own devices.
Take this a step further, and a local population of strangers could collectively broadcast countless 360-degree feeds across a local mesh network.
Imagine a scenario in which armed attacks break out across disjointed urban districts, each cluster of eye witnesses and at-risk civilians broadcasting an aggregate of 360-degree videos, all fed through photogrammetry AIs that build out a live hologram in real time, giving family members and first responders complete information.
Or take a coastal community in the throes of torrential rainfall and failing infrastructure. Now empowered by a collective live feed, verification of data reports takes a matter of seconds, and richly layered data informs first responders and AI platforms with unbelievable accuracy and specificity of relief needs.
By linking all the right technological pieces, we might even see the rise of automated drone deliveries. Imagine: crowdsourced intelligence is first cross-referenced with sensor data and verified algorithmically. AI is then leveraged to determine the specific needs and degree of urgency at ultra-precise coordinates. Within minutes, once approved by personnel, swarm robots rush to collect the requisite supplies, equipping size-appropriate drones with the right aid for rapid-fire delivery.
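The pipeline just described (cross-reference crowd reports with sensors, verify, rank by urgency, dispatch) can be sketched in a few lines. The field names, verification tolerance, and urgency formula here are all invented for illustration:

```python
# Hedged sketch of the crowd-report → verification → dispatch pipeline.
import heapq

def verified(report, sensor_readings, tolerance=2.0):
    """Accept a crowdsourced report only if a nearby sensor corroborates it."""
    reading = sensor_readings.get(report["zone"])
    return reading is not None and abs(reading - report["severity"]) <= tolerance

def dispatch_queue(reports, sensor_readings):
    queue = []
    for r in reports:
        if verified(r, sensor_readings):
            # heapq is a min-heap, so negate urgency to pop highest first
            heapq.heappush(queue, (-r["severity"] * r["people"], r["zone"]))
    return [heapq.heappop(queue)[1] for _ in range(len(queue))]

reports = [
    {"zone": "north", "severity": 8, "people": 12},
    {"zone": "south", "severity": 3, "people": 40},
    {"zone": "east",  "severity": 9, "people": 5},   # no sensor match: dropped
]
sensors = {"north": 7.5, "south": 4.0, "east": 2.0}
print(dispatch_queue(reports, sensors))  # → ['south', 'north']
```

Note the design choice: unverified reports are dropped rather than queued, reflecting the algorithmic-verification step in the scenario above.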
This brings us to a second critical convergence: robots and drones.
While cutting-edge drone technology revolutionizes the way we deliver aid, new breakthroughs in AI-geared robotics are paving the way for superhuman emergency responses in some of today’s most dangerous environments.
Let’s explore a few of the most disruptive examples to reach the testing phase.
First up….
Autonomous Robots and Swarm Solutions
As hardware advancements converge with exploding AI capabilities, disaster relief robots are graduating from assistance roles to fully autonomous responders at a breakneck pace.
Born out of MIT’s Biomimetic Robotics Lab, the Cheetah III is but one of many robots that may form our first line of defense in everything from earthquake search-and-rescue missions to high-risk ops in dangerous radiation zones.
Now capable of running at 6.4 meters per second, Cheetah III can even leap up to a height of 60 centimeters, autonomously determining how to avoid obstacles and jump over hurdles as they arise.

Source: Massachusetts Institute of Technology (MIT)
Initially designed to perform spectral inspection tasks in hazardous settings (think: nuclear plants or chemical factories), the Cheetah’s various iterations have focused on increasing its payload capacity, range of motion, and even a gripping function with enhanced dexterity.
But as explained by the Lab’s director and MIT Associate Professor Sangbae Kim, Cheetah III and future versions are aimed at saving lives in almost any environment: “Let’s say there’s a fire or high radiation, [whereby] nobody can even get in. [It’s in these circumstances that] we’re going to send a robot [to] check if people are inside. [And even] before doing all that, the short-term goal will be sending robot where we don’t want to send humans at all, […] for example, toxic areas or [those with] mild radiation.”
And the Cheetah III is not alone.
This past February, Tokyo Electric Power Company (TEPCO) put one of its own robots to the test.
For the first time since Japan’s devastating 2011 tsunami, which triggered three meltdowns at the Fukushima Daiichi nuclear power plant, a robot has successfully examined the reactor’s fuel.
Broadcasting the process with its built-in camera, the robot was able to retrieve small chunks of radioactive fuel at five of the six test sites, offering tremendous promise for long-term plans to clean up the still-deadly interior.
Also out of Japan, Mitsubishi Heavy Industries (MHI) is even using robots to fight fires with full autonomy. In a remarkable new feat, MHI’s Water Cannon Bot can now put out blazes in difficult-to-access or highly dangerous fire sites.
Delivering foam or water at 4,000 liters per minute and 1 megapascal (MPa) of pressure, the Cannon Bot and its accompanying Hose Extension Bot even form part of a greater AI-geared system to conduct reconnaissance and surveillance on larger transport vehicles.
As wildfires grow ever more untamable, high-volume production of such bots could prove a true lifesaver. Paired with predictive AI forest fire mapping and autonomous hauling vehicles, solutions like MHI’s Cannon Bot will not only save numerous lives but also prevent population displacement and paralyzing damage to our natural environment before disaster has the chance to spread.
But even in cases where emergency shelter is needed, groundbreaking (literally) robotics solutions are fast to the rescue.
After multiple iterations by Fastbrick Robotics, the Hadrian X end-to-end bricklaying robot can now autonomously build a fully livable, 180-square-meter home in under three days. Using a laser-guided robotic attachment, the all-in-one brick-loaded truck simply drives to a construction site and directs blocks through its robotic arm in accordance with a 3D model.

Source: Fastbrick Robotics
Meeting verified building standards, Hadrian and similar solutions hold massive promise in the long-term, deployable across post-conflict refugee sites and regions recovering from natural catastrophes.
But what if we need to build emergency shelters from local soil at hand? Marking an extraordinary convergence between robotics and 3D printing, the Institute of Advanced Architecture of Catalonia (IAAC) is already working on a solution.
In a major feat for low-cost construction in remote zones, IAAC has found a way to convert almost any soil into a building material with three times the tensile strength of industrial clay. Offering myriad benefits, including natural insulation, low GHG emissions, fire protection, air circulation and thermal mediation, IAAC’s new 3D printed native soil can build houses on-site for as little as $1,000.
But while cutting edge robotics unlock extraordinary new frontiers for low-cost, large-scale emergency construction, novel hardware and computing breakthroughs are also enabling robotic scale at the other extreme of the spectrum.
Again, inspired by biological phenomena, robotics specialists across the U.S. have begun to pilot tiny robotic prototypes for locating trapped individuals and assessing infrastructural damage.
Take RoboBees, tiny Harvard-developed bots that use electrostatic adhesion to ‘perch’ on walls and even ceilings, evaluating structural damage in the aftermath of an earthquake.
Or Carnegie Mellon’s prototyped Snakebot, capable of navigating through entry points that would otherwise be completely inaccessible to human responders. Driven by AI, the Snakebot can maneuver through even the most densely packed rubble to locate survivors, using cameras and microphones for communication.
But when it comes to fast-paced reconnaissance in inaccessible regions, miniature robot swarms have good company.
Next-Generation Drones for Instantaneous Relief Supplies
Particularly in the case of wildfires and conflict zones, autonomous drone technology is fundamentally revolutionizing the way we identify survivors in need and automate relief supply.
Not only are drones enabling high-resolution imagery for real-time mapping and damage assessment, but preliminary research shows that UAVs far outpace ground-based rescue teams in locating isolated survivors.
As presented by a team of electrical engineers from the University of Science and Technology of China, drones could even build out a mobile wireless broadband network in record time using a “drone-assisted multi-hop device-to-device” program.
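The core of such drone-assisted multi-hop relaying is simple to sketch: when towers are down, a message hops device-to-device (with drones as extra relay nodes) until it reaches a gateway with backhaul. The breadth-first search below is a generic stand-in for that idea, not the routing scheme from the cited paper; the node names are invented.

```python
# Minimal hop-count routing over a device-to-device link graph.
from collections import deque

def route(links, src, gateway):
    """Shortest hop path from src to gateway over direct radio links."""
    prev = {src: None}
    q = deque([src])
    while q:
        node = q.popleft()
        if node == gateway:
            path = []
            while node is not None:   # walk predecessors back to src
                path.append(node)
                node = prev[node]
            return path[::-1]
        for neigh in links.get(node, []):
            if neigh not in prev:
                prev[neigh] = node
                q.append(neigh)
    return None  # no relay chain reaches the gateway

# Phones A-C in the disaster zone, drone D1 bridging to gateway G.
links = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D1"],
         "D1": ["C", "G"], "G": ["D1"]}
print(route(links, "A", "G"))  # → ['A', 'B', 'C', 'D1', 'G']
```

Each hop is a short radio link a phone or drone can actually sustain, which is why a chain of them can restore connectivity far faster than rebuilding towers.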
And as shown during Houston’s Hurricane Harvey, drones can provide scores of predictive intel on everything from future flooding to damage estimates.
Among multiple others, a team led by Texas A&M computer science professor and director of the university’s Center for Robot-Assisted Search and Rescue Dr. Robin Murphy flew a total of 119 drone missions over the city, from small-scale quadcopters to military-grade unmanned planes. Not only were these critical for monitoring levee infrastructure, but also for identifying those left behind by human rescue teams.
But beyond surveillance, UAVs have begun to provide lifesaving supplies across some of the most remote regions of the globe.
One of the most inspiring examples to date is Zipline.
Created in 2014, Zipline has completed 12,352 life-saving drone deliveries to date. While drones are designed, tested and assembled in California, Zipline primarily operates in Rwanda and Tanzania, hiring local operators and providing over 11 million people with instant access to medical supplies.
Providing everything from vaccines and HIV medications to blood and IV tubes, Zipline’s drones far outpace ground-based supply transport, in many instances providing life-critical blood cells, plasma and platelets in under an hour.

Source: Zipline
But drone technology is even beginning to transcend the limited scale of medical supplies and food.
Now developing its drones under contracts with DARPA and the U.S. Marine Corps, Logistic Gliders, Inc. has built autonomously navigating drones capable of carrying 1,800 pounds of cargo over unprecedented long distances.
Built from plywood, Logistic’s gliders are projected to cost as little as a few hundred dollars each, making them perfect candidates for high-volume, remote aid deliveries, whether navigated by a pilot or self-flown in accordance with real-time disaster zone mapping.
As hardware continues to advance, autonomous drone technology coupled with real-time mapping algorithms pose no end of abundant opportunities for aid supply, disaster monitoring, and richly layered intel previously unimaginable for humanitarian relief.
Concluding Thoughts
Perhaps one of the most consequential and impactful applications of converging technologies is their transformation of disaster relief methods.
While AI-driven intel platforms crowdsource firsthand experiential data from those on the ground, mobile connectivity and drone-supplied networks are granting newfound narrative power to those most in need.
And as a wave of new hardware advancements gives rise to robotic responders, swarm technology and aerial drones, we are fast approaching an age of instantaneous and efficiently distributed responses, in the midst of conflict and natural catastrophes alike.
Empowered by these new tools, what might we create when everyone on the planet has the same access to relief supplies and immediate resources? In a new age of prevention and fast recovery, what futures can you envision?

Board of Directors | Board of Advisors | Strategic Leadership
Please keep me in mind as your Executive Coach, openings for Senior Executive Engagements, and Board of Director openings. If you hear of anything within your network that you think might be a positive fit, I’d so appreciate if you could send a heads up my way. Email me: [email protected] or Schedule a call: Cliff Locks
#5G #Automotive #BoardofDirectors #BoD #artificialintelligence #AI #innovation #IoT #virtualreality #vr #AR #augmentedreality #HR #executive #business #CXO #CEO #CFO #CIO #BoardofDirectors #executive #success #work #follow #leadership #Engineering #corporate #office #entrepreneur #coaching #businessman #professional #excellence #development #motivation Contributors: Peter Diamandis and Clifford Locks #InvestmentCapitalGrowth
Let’s get educated on the future of transportation: it’s faster than autonomous vehicles and flying cars

What’s faster than autonomous vehicles and flying cars?
Try Hyperloop, rocket travel and robotic avatars.
Hyperloop is currently working towards 670 mph (1,080 km/h) passenger pods, capable of zipping us from Los Angeles to downtown Las Vegas in under 30 minutes.
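As a back-of-the-envelope check on that claim (assuming a roughly 270-mile route, since the exact alignment is not public), the cruise-time arithmetic works out:

```python
# Cruise-only travel time, ignoring acceleration and station stops.
speed_mph = 670
distance_miles = 270  # assumed LA-to-Las-Vegas route length
minutes = distance_miles / speed_mph * 60
print(round(minutes, 1))  # → 24.2
```

About 24 minutes of cruising leaves a few minutes of margin for acceleration and deceleration, consistent with the "under 30 minutes" figure.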
Rocket Travel (think SpaceX’s Starship) promises to deliver you almost anywhere on the planet in under an hour. Think New York to Shanghai in 39 minutes.
But wait, it gets even better…
As 5G connectivity, hyper-realistic VR, and next-gen robotics continue their exponential progress, the emergence of “Robotic Avatars” will all but nullify the concept of distance, replacing human travel with immediate remote telepresence.
Let’s dive in.
Hyperloop One: LA to SF in 35 Minutes
Did you know that Hyperloop was the brainchild of Elon Musk? …just one in a series of transportation innovations from a man determined to leave his mark on the industry.
In 2013, in an attempt to shorten the long commute between Los Angeles and San Francisco, the California state legislature proposed a $68 billion budget allocation for what appeared to be the slowest and most expensive bullet train in history.
Musk was outraged. The cost was too high, the train too sluggish. Teaming up with a group of engineers from Tesla and SpaceX, he published a 58-page concept paper for “The Hyperloop,” a high-speed transportation network that used magnetic levitation to propel passenger pods down vacuum tubes at speeds of up to 670 mph.
If successful, it would zip you across California in 35 minutes—just enough time to watch your favorite sitcom.
In 2014, venture capitalist Shervin Pishevar, with Musk’s blessing, started Hyperloop One with me, Jim Messina (former White House Deputy Chief of Staff for President Obama), and tech entrepreneurs Joe Lonsdale and David Sacks as founding board members.
A couple of years after that, the Virgin Group invested in this idea, Richard Branson was elected chairman, and Virgin Hyperloop One was born.
“The Hyperloop exists,” says Josh Giegel, co-founder and chief technology officer of Hyperloop One, “because of the rapid acceleration of power electronics, computational modeling, material sciences, and 3D printing.”
Thanks to these convergences, there are now ten major Hyperloop One projects—in various stages of development—spread across the globe. Chicago to DC in 35 minutes. Pune to Mumbai in 25 minutes.
According to Giegel: “Hyperloop is targeting certification in 2023. By 2025, the company plans to have multiple projects under construction and running initial passenger testing.”
So think about this timetable: autonomous car rollouts by 2020. Hyperloop certification and aerial ridesharing by 2023. By 2025, going on vacation might have a totally different meaning. Going to work most definitely will.
But what’s faster than Hyperloop?
Rocket Travel
As if autonomous vehicles, flying cars, and Hyperloop weren’t enough, in September of 2017, speaking at the International Astronautical Congress in Adelaide, Australia, Musk promised that for the price of an economy airline ticket, his rockets will fly you “anywhere on Earth in under an hour.”
Musk wants to use SpaceX’s megarocket, Starship, which was designed to take humans to Mars, for terrestrial passenger delivery. The Starship travels at 17,500 mph. It’s an order of magnitude faster than the supersonic jet Concorde.
Think about what this actually means: New York to Shanghai in thirty-nine minutes. London to Dubai in twenty-nine minutes. Hong Kong to Singapore in twenty-two minutes.
So how real is the Starship?
“We could probably demonstrate this [technology] in three years,” Musk explained, “but it’s going to take a while to get the safety right. It’s a high bar. Aviation is incredibly safe. You’re safer on an airplane than you are at home.”
That demonstration is proceeding as planned. In September 2017, Musk announced his intentions to retire his current rocket fleet, both the Falcon 9 and Falcon Heavy, and replace them with the Starships in the 2020s.
Less than a year later, LA mayor Eric Garcetti tweeted that SpaceX was planning to break ground on an eighteen-acre rocket production facility near the port of Los Angeles.
And April of this year marked an even bigger milestone: the very first test flights of the rocket.
Thus, sometime in the next decade or so, “off to Europe for lunch” may become a standard part of our lexicon.
Avatars
Wait, wait, there’s one more thing.
While the technologies we’ve discussed will decimate the traditional transportation industry, there’s something on the horizon that will disrupt travel itself.
What if, to get from A to B, you didn’t have to move your body? What if you could quote Captain Kirk and just say: “Beam me up, Scotty.”
Well, shy of the Star Trek transporter, there’s the world of avatars.
An avatar is a second self, typically in one of two forms. The digital version has been around for a couple of decades. It emerged from the video game industry and was popularized by virtual world sites like Second Life and books-turned-blockbusters like Ready Player One.
A VR headset teleports your eyes and ears to another location, while a set of haptic sensors shifts your sense of touch. Suddenly, you’re inside an avatar inside a virtual world. As you move in the real world, your avatar moves in the virtual.
Use this technology to give a lecture and you can do it from the comfort of your living room, skipping the trip to the airport, the cross-country flight, and the ride to the conference center.
Robots are the second form of avatars. Imagine a humanoid robot that you can occupy at will. Maybe, in a city far from home, you’ve rented the bot by the minute—via a different kind of ridesharing company—or maybe you have spare robot avatars located around the country.
Either way, put on VR goggles and a haptic suit, and you can teleport your senses into that robot. This allows you to walk around, shake hands, and take action—all without leaving your home.
And like the rest of the tech we’ve been talking about, even this future isn’t far away.
In 2018, entrepreneur Dr. Harry Kloor proposed the design of an Avatar XPRIZE to All Nippon Airways (ANA), Japan’s largest airline. ANA then funded this vision to the tune of $10 million to speed the development of robotic avatars. Why? Because ANA knows this is one of the technologies likely to disrupt the airline industry, and it wants to be ready.
ANA recently announced its “newme” robot, which humans can use to virtually explore new places. The colorful robots have Roomba-like wheeled bases and cameras mounted around eye level, which capture surroundings viewable through VR headsets.
If the robot were stationed in your parents’ home, you could cruise around the rooms and chat with your family at any time of day. After revealing the technology at Tokyo’s Combined Exhibition of Advanced Technologies in October, ANA plans to deploy 1,000 newme robots by 2020.
With virtual avatars like “newme,” geography, distance, and cost will no longer limit our travel choices.
From attractions like the Eiffel Tower or the pyramids of Egypt, to unreachable destinations like the Moon or deep sea, we will be able to transcend our own physical limits, explore the world and outer space, and access nearly any experience imaginable.
Final Thoughts
Individual car ownership has enjoyed over a century of ascendancy and dominance.
The first real threat it faced—today’s ride-sharing model—only showed up in the last decade. But that ridesharing model won’t even get ten years to dominate.
Already, it’s on the brink of autonomous car displacement, which is on the brink of flying car disruption, which is on the brink of Hyperloop and rockets-to-anywhere decimation.
Plus, avatars. The most important part: All of this change will happen over the next ten years.
Welcome to a future of human presence where the only constant is rapid change.

Learn about continuous, ultra-cheap, personalized, and proactive healthcare

The U.S. healthcare industry is in for a major disruption in the decade ahead.
It is so broken that it’s horrifying.
U.S. healthcare spending is expected to hit a total of US$3.6 trillion in 2019.
Fear of liability prompts U.S. doctors to spend US$210 billion per year on procedures patients don’t need.
Of every 5,000 new drugs introduced, five make it to human testing, and only one is ultimately approved. Even then, the average pharmaceutical product takes 12 years to get from lab to patient, costing upwards of $2.5 billion.
Over the next five blogs, we’ll be diving into the digital medicine and biotech revolution unfolding before us. A new generation of AI-enabled, data-driven companies will transform what is today “sick care” into healthcare.
In this blog, we’ll look at a new generation of diagnostics that enable you to become the CEO of your own health. Ultimately, how you can catch disease at “stage-0” before it becomes life threatening.
Let’s dive in…
Continuous DIY Diagnostics
On a wintery Wednesday in January 2026, you’re being watched. Carefully watched.
Technically, you’re asleep in your bed, but Google’s home assistant knows your schedule. Thanks to your Oura ring, it also knows you’ve just completed a REM cycle and are now entering Stage 1 sleep—making it the perfect time to wake you up.
A gentle increase in the room’s lighting simulates the sunrise, while optimized light wavelengths maximize wakefulness and improve your mood.
“Hey Google, how’s my health this morning?” “One moment,” says your digital assistant.
It takes thirty seconds for the full diagnostic to run, which is pretty good considering the system deploys dozens of sensors capturing gigabytes of data.
Smart sensors in toothbrush and toilet, wearables in bedding and clothing, implantables inside your body—a mobile health suite with a 360-degree view of your system. “Your microbiome looks perfect,” Google tells you. “Also, blood glucose levels are good, vitamin levels fine…”
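That "morning diagnostic" boils down to sensor fusion plus range checks. A minimal sketch follows; the metrics, reference ranges, and readings are invented for illustration and are not medical guidance:

```python
# Fuse the night's sensor readings and flag anything out of range.
REFERENCE_RANGES = {
    "fasting_glucose_mg_dl": (70, 100),
    "resting_heart_rate_bpm": (50, 90),
    "spo2_percent": (95, 100),
}

def morning_report(readings):
    flags = [metric for metric, value in readings.items()
             if not (REFERENCE_RANGES[metric][0]
                     <= value
                     <= REFERENCE_RANGES[metric][1])]
    return "All metrics look good." if not flags else f"Check: {', '.join(flags)}"

print(morning_report({"fasting_glucose_mg_dl": 88,
                      "resting_heart_rate_bpm": 62,
                      "spo2_percent": 98}))  # → All metrics look good.
```

A real system would add trend analysis and clinical escalation, but the always-on loop (sense, compare, report) is the core of the scenario above.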
Google is developing a full range of internal and external sensors that monitor everything from blood sugar to blood chemistry.
And that’s just Google. The list of once multimillion-dollar medical machines now being dematerialized, demonetized, democratized, and delocalized—that is, made into portable and even wearable sensors—could fill a textbook.
Consider the spectrum of possibilities.
On the whiz-bang side, there’s Exo’s cheap, AI-enabled handheld 3D ultrasound imager—meaning you will soon be able to track anything from wound healing to fetal growth from the comfort of your home.
Or take former Google X project leader Mary Lou Jepsen’s startup, Openwater, which uses red laser holography to create a portable MRI (magnetic resonance imaging), turning what is today a multimillion-dollar machine into a wearable consumer electronics device. With successful rollout, products like that of Openwater could soon give three-quarters of the world access to medical imaging they currently lack.
Yet simpler developments might be more revolutionary.
In less than two decades, wearables have gone from first-generation step-counting self-trackers to the fourth-generation Apple Watch, which includes an FDA-cleared ECG scanner capable of real-time cardiac monitoring.
Or look at Final Frontier Medical Devices’ DxtER (winner of the $10 million Qualcomm Tricorder XPRIZE): a collection of easy-to-use, noninvasive medical sensors, and a diagnostic AI, accessible via app. Already, DxtER reliably detects over fifty common ailments.
In convergence, these developments point towards a future of always-on health monitoring and cheap, easy diagnostics.
The technical term for this shift is “mobile health,” a field predicted to explode into a $102 billion market by 2022. Step aside, WebMD. The idea here is to put a virtual doctor, on demand, in your back pocket.
And we’re getting close.
Riding the convergence of networks, sensors, and computing, AI-backed medical chatbots are now flooding the market. These apps can diagnose everything from a rash to retinopathy.
And it’s not just physical ailments. Woebot is now taking on mental health, delivering cognitive behavioral therapy via Facebook Messenger to patients suffering from depression.
Proactive Healthcare
So where are these trends actually headed?
Take Human Longevity Inc., a company Peter Diamandis co-founded in 2013. Its key offering, the “Health Nucleus,” is an annual, three-hour health scan consisting of whole genome sequencing, whole body MRI, heart and lung CT, echocardiogram, and a slew of clinical blood tests—essentially the most complete picture of health currently available.
This picture is important for two reasons. The first is early disease detection.
In 2018, Human Longevity published stats on its first 1,190 clients. Nine percent of its patients uncovered previously undetected coronary artery disease (the number one killer in the world), 2.5 percent found aneurysms (the number two killer in the world), 2 percent saw tumors—and so forth. In total, a staggering 14.4 percent had significant issues requiring immediate intervention, while 40 percent found a condition that needed long-term monitoring.
The second reason this is important? Everything Human Longevity is measuring and tracking via half-day annual visits will soon come to you on demand. Thanks to always-on, always-watching sensors, your smartphone is about to become your doctor.
From Damage Control to Prevention
Skyrocketing AI capabilities, dematerializing sensors, and next-gen computing power are on the verge of embedding themselves in your wearables, home, future AR devices, and—one day—implantables.
If successful, today’s era of lengthy, expensive, and reactive “sick care”—mediated by insurance middlemen—will give way to continuous, ultra-cheap, personalized, and proactive healthcare.
Soon to own our (technological) doctors (not to mention our health data), we will no longer correct for risk once losses are incurred. Instead, we’ll be minimizing risk 24/7, at extraordinarily low cost, without even thinking about it.

Future of Virtual Reality moving from deceptive to disruptive

In 2016, venture investments in VR exceeded US$800 million, while AR and MR received a total of US$450 million. Just a year later, investments in AR and VR startups nearly tripled to US$3.6 billion.
And today, major players are bringing VR headsets to market that have the power to revolutionize the industry, as well as countless others.
Already, VR headset sales volumes are expected to reach 98.4 million by 2023, according to Futuresource Consulting. And beyond the headsets themselves, Facebook’s $399 Oculus Quest brought in US$5 million in content sales within the first two weeks of its release this past spring.
With companies like Niantic ($4B valuation), Improbable ($2B valuation), and Unity ($6B valuation) achieving unicorn status in recent years, the VR space is massively heating up.
In this blog, we will dive into a brief history of VR, recent investment surges, and the future of this revolutionary technology.
Brief History of VR
For all of history, our lives have been limited by the laws of physics and mitigated by the five senses. VR is rewriting those rules.
It’s letting us digitize experiences and teleport our senses into a computer-generated world where the limits of imagination become the only brake on reality. But it’s taken a while to get here.
Much like AI, the concept of VR has been around since the 1960s. The 1980s saw the first false dawn, when the earliest “consumer-facing” systems began to show up. In 1989, if you had a spare $250,000, you could purchase the EyePhone, a VR system built by Jaron Lanier’s company VPL Research. (Lanier coined the term “virtual reality.”)
Unfortunately, the computer that powered that system was the size of a dorm room refrigerator, while the headset it required was bulky, awkward and only generated about five frames a second—six times slower than the average television of that era.
By the early 1990s, the hype had faded and VR entered a two-decade deceptive phase. Through the 2000s, the convergence of increasingly powerful game engines and AI-image rendering software flipped the script. Suddenly, deceptive became disruptive and the VR universe opened for business.
The Disruptive Phase: Surges in VR Investment
In 2014, Facebook spent $2 billion to acquire Oculus. By 2015, VentureBeat reported that an arena which typically saw only ten new entrants a year suddenly had 234.
In June 2016, HTC announced the release of its ‘Business Edition’ of the Vive for $1,200, followed six months later by their announcement of a tether-less VR upgrade.
A year later, Samsung cashed in on this shift, selling 4.3 million headsets and turning enough heads that everyone from Apple and Google to Cisco and Microsoft decided to investigate VR.
Phone-based VR showed up soon afterwards, dropping barriers to entry as low as $5. By 2018, the first wireless adaptors, standalone headsets and mobile headsets hit the market.
Resolution-wise, 2018 was also when Google and LG doubled their pixels-per-inch count and increased their refresh rate from VPL’s five frames a second to over 120.
Around the same time, the systems began targeting more senses than just vision. HEAR360’s “omni-binaural” microphone suite captures 360 degrees of audio, which means immersive sound has now caught up to immersive visuals.
Touch has also reached the masses, with haptic gloves, vests and full body suits hitting the consumer market. Scent emitters, taste simulators, and every kind of sensor imaginable—including brainwave readers—are all trying to put the “very” into verisimilitude.
And the number of virtual explorers continues to mount. In 2017, there were 90 million active users, which nearly doubled to 171 million by 2018. YouTube’s VR channel has over three million subscribers.
And that number is growing. By 2020, estimates put the VR market at $30 billion, and it’s hard to find a field that will be left untouched.
Future of VR: Emotive and Immersive Education
History class, 2030. This week’s lesson: Ancient Egypt. The pharaohs, the queens, the tombs—the full Tut.
Sure, you’d love to see the pyramids in person. But the cost of airfare? Hotel rooms for the entire class? Plus, taking two weeks off from school for the trip? None of these things are doable. Worse, even if you could go, you couldn’t go. Many of Egypt’s tombs are closed for repairs, and definitely off-limits to a group of teenagers.
Not to worry, VR solves these problems. And in VR world, you and your classmates can easily breach Queen Nefertari’s burial chamber, touch the hieroglyphics, even scramble atop her sarcophagus—impossible opportunities in physical reality. You also have a world-class Egyptologist as your guide.
But turning your attention to the back of the tomb doesn’t require waiting until 2030. In 2018, Philip Rosedale and his team at High Fidelity pulled off this exact virtual field trip.
First, they 3D-laser scanned every square inch of Queen Nefertari’s tomb. Next, they shot thousands of high resolution photos of the burial chamber. By stitching together more than ten thousand photos into a single vista, then laying that vista atop their 3D-scanned map, Rosedale created a stunningly accurate virtual tomb. Next, he gave a classroom full of kids HTC Vive VR headsets.
Because High Fidelity is a social VR platform, meaning multiple people can share the same virtual space at the same time, the entire class was able to explore that tomb together. In total, their fully immersive field trip to Egypt required zero travel time, zero travel expenses.
VR will not only cover traditional educational content, but also expand our emotional education.
Jeremy Bailenson, founding director of Stanford’s Virtual Human Interaction Lab, has spent two decades exploring VR’s ability to produce real behavioral change. He’s developed first-person VR experiences of racism, sexism, and other forms of discrimination.
For example, experiencing what it would be like to be an elderly, homeless, African American woman living on the streets of Baltimore produces real change: A significant shift in empathy and understanding.
“Virtual reality is not a media experience,” explains Bailenson. “When it’s done well, it’s an actual experience. In general our findings show that VR causes more behavior changes, causes more engagement, causes more influence than other types of traditional media.”
Nor is empathy the only emotion VR appears capable of training. In research conducted at USC, psychologist Skip Rizzo has had considerable success using virtual reality to treat PTSD in soldiers. Other scientists have extended this to the full range of anxiety disorders.
VR, especially when combined with AI, has the potential to facilitate a top shelf traditional education, plus all the empathy and emotional skills that traditional education has long been lacking.
When AI and VR converge with wireless 5G networks, our global education problem moves from the nearly impossible challenge of finding teachers and funding schools for the hundreds of millions in need, to the much more manageable puzzle of building a fantastic digital education system that we can give away for free to anyone with a headset. It’s quality and quantity on demand.
In the workplace, VR will serve as an efficient trainer for new employees.
Some 10,000 of Walmart’s 1.2 million employees have taken VR-based skills management tests. Learning modules that once took 35 to 45 minutes now take 3 to 5. The company plans to train 1 million employees using Oculus VR headsets by the end of this year. The upfront cost of the headsets will ultimately be recovered in labor efficiencies.
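Those labor efficiencies are easy to ballpark. A quick back-of-the-envelope calculation, using the midpoints of the module times quoted above (the underlying figures are Walmart's; the arithmetic is illustrative):

```python
# Rough scale of the training-time savings implied by Walmart's numbers:
# modules that took 35-45 minutes now take 3-5, across 1 million employees.
old_minutes = (35 + 45) / 2   # midpoint of the old module length
new_minutes = (3 + 5) / 2     # midpoint of the new module length
employees = 1_000_000

saved_hours = (old_minutes - new_minutes) * employees / 60
print(f"{saved_hours:,.0f} hours saved per module")  # 600,000 hours saved per module
```

At that scale, even a single training module recoups a large fleet of headsets.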
Multiple Worlds, Multiple Economies
We no longer live in only one place. We have real-world personae and online personae. This delocalized existence is only going to expand. With the rise of AR and VR, we’re introducing more layers to this equation.
You’ll have avatars for work and avatars for play, and all of these versions of yourself are opportunities for new businesses. Consider the multi-million-dollar economy that sprang up around the very first virtual world, Second Life. People were paying other people to design digital clothes and digital houses for their digital avatars.
Every time we add a new layer to the digital strata, we’re also adding an entire economy built upon that layer, meaning we are now conducting our business in multiple worlds at once.
Reserve Peter Diamandis’s next book. Much of this blog came from his upcoming book, The Future Is Faster Than You Think. If you’d like to be notified when it comes out and receive special offers (signed copies, free stuff, etc.), register here for early bird updates and to learn more!

Exponential Disruption and Its Future Impact On You

The average American meal travels 1,500-2,500 miles to get to your plate.
Food… What we eat, and how we grow it, will be fundamentally transformed in the next decade.
Already, vertical farming is projected to exceed US$12 billion in annual revenue by mid-decade, surging at an astonishing 25 percent annual growth rate.
Meanwhile, the food 3D printing industry is expected to grow at an even higher rate, averaging nearly 40 percent annual growth.
And converging exponential technologies—from materials science to AI-driven digital agriculture—are not slowing down. Today’s breakthroughs will soon allow our planet to boost its food production by nearly 70 percent, using a fraction of the real estate and resources, to feed 9 billion by mid-century.
What you consume, how it was grown, and how it will end up in your stomach will all ride the wave of converging exponentials, revolutionizing the most basic of human needs.
Printing Food
3D printing has already had a profound impact on the manufacturing sector. We are now able to print in hundreds of different materials, making anything from toys to houses to organs. However, we are finally seeing the emergence of 3D printers that can print food itself.
Redefine Meat, an Israeli startup, wants to tackle industrial meat production using 3D printers that can generate meat, no animals required. The printer takes in fat, water, and three different plant protein sources, using these ingredients to print a meat fiber matrix with trapped fat and water, thus mimicking the texture and flavor of real meat.
Slated for release in 2020 at a cost of $100,000 per unit, the machines are rapidly demonetizing, and Redefine Meat will begin by targeting clients in industrial-scale meat production.
Anrich3D aims to take this process a step further, 3D-printing meals that are customized to your medical records, health data from your smart wearables, and patterns detected by your sleep trackers. The company plans to use multiple extruders for multi-material printing, allowing it to dispense each ingredient precisely for nutritionally optimized meals. Currently in an R&D phase at Nanyang Technological University in Singapore, the company hopes to hold its first taste tests in 2020.
These are only a few of the many 3D food printing startups springing into existence. The benefits from such innovations are boundless.
Not only will food 3D printing grant consumers control over the ingredients and mixtures they consume, but it is already beginning to enable new innovations in flavor itself, democratizing far healthier meal options in newly customizable cuisine categories.
Vertical Farming
Vertical farming, whereby food is grown in vertical stacks (in skyscrapers and buildings rather than outside in fields), marks a classic case of converging exponential technologies. Over just the past decade, the technology has surged from a handful of early-stage pilots to a full-grown industry.
Today, the average American meal travels 1,500-2,500 miles to get to your plate. As summed up by Worldwatch Institute researcher Brian Halweil, “we are spending far more energy to get food to the table than the energy we get from eating the food.”
Additionally, the longer foods are out of the soil, the less nutritious they become, losing on average 45 percent of their nutrition before being consumed.
Yet beyond cutting down on time and transportation losses, vertical farming eliminates a whole host of issues in food production.
Relying on hydroponics and aeroponics, vertical farms allow us to grow crops with 90 percent less water than traditional agriculture—which is critical for our increasingly thirsty planet.
Currently, the largest player around is Bay Area-based Plenty Inc. With over $200 million in funding from Softbank, Plenty is taking a smart tech approach to indoor agriculture. Plants grow on 20-foot high towers, monitored by tens of thousands of cameras and sensors, optimized by big data and machine learning.
This allows the company to pack 40 plants in the space previously occupied by one. The process also produces yields 350X greater than outdoor farmland, using less than 1 percent as much water.
And rather than bespoke veggies for the wealthy few, Plenty’s processes allow it to knock 20-35 percent off the prices of traditional grocery stores. To date, Plenty has its home base in South San Francisco, a 100,000-square-foot farm in Kent, Washington, an indoor farm in the United Arab Emirates, and recently started construction on over 300 farms in China.
Another major player is New Jersey-based Aerofarms, which can now grow 2 million pounds of leafy greens without sunlight or soil.
To do this, Aerofarms leverages AI-controlled LEDs to provide optimized wavelengths of light for each individual plant. Using aeroponics, the company delivers nutrients by misting them directly onto the plants’ roots— no soil required. Rather, plants are suspended in a growth mesh fabric made from recycled water bottles. And here too, sensors, cameras and machine learning govern the entire process.
While 50-80 percent of the cost of vertical farming is human labor, autonomous robotics promises to solve that problem. Enter contenders like Iron Ox, a firm that has developed the Angus robot, capable of moving around plant-growing containers.
The writing is on the wall, and traditional agriculture is fast being turned on its head. As explained by Plenty’s CEO Matt Barnard, “Just like Google benefitted from the simultaneous combination of improved technology, better algorithms and masses of data, we are seeing the same [in vertical farming].”
Materials Science
In an era where materials science, nanotechnology, and biotechnology are rapidly becoming the same field of study, key advances are enabling us to create healthier, more nutritious, more efficient, and longer-lasting food.
For starters, we are now able to boost the photosynthetic abilities of plants.
Using novel techniques to improve a micro-step in the photosynthesis process chain, researchers at UCLA were able to boost tobacco crop yield by 14-20 percent. Meanwhile, the RIPE Project, backed by Bill Gates and run out of the University of Illinois, has matched and improved those numbers. And researchers at the University of Essex were even able to improve tobacco yield by 27-47 percent by increasing the levels of a protein involved in photorespiration.
Tyton BioEnergy, based in Danville, Virginia, has been working with tobacco as a source of biofuel and oil. With tobacco plants that can grow up to 15 feet high, Tyton can secure an enormous amount of plant matter to press and process into the raw materials for biofuel. In fact, the company says: “This proprietary energy tobacco can produce up to three times the amount of ethanol per acre as corn and three times the oil per acre as soy.”
Now, Tyton says it has figured out a way to use the tobacco biofuel as jet fuel, putting it in the surprisingly (and increasingly) crowded space of tobacco-based jet fuels. Boeing has been working on something similar for a while, though using South African tobacco rather than American.
In yet another win for food-related materials science, Santa Barbara-based Apeel Sciences is tackling the vexing challenge of food waste. Now approaching commercialization, Apeel uses lipids and glycerolipids found in the peels, seeds, and pulps of fruits and vegetables to create “cutin”—the fatty substance that composes the skin of fruits and prevents them from rapidly spoiling by trapping moisture.
By spraying fruits with this generated substance, Apeel can preserve foods 60 percent longer, using an odorless, tasteless, colorless organic substance.
And stores across the U.S. are already using this method. By leveraging our advancing knowledge of plants and chemistry, materials science is allowing us to produce more food with far longer-lasting freshness and greater nutritional value than ever before.
Convergence
With advances in 3D printing, vertical farming and materials sciences, we can now make food smarter, more productive, and far more resilient.
By the end of the next decade, you should be able to 3D print a fusion cuisine dish from the comfort of your home, using ingredients harvested from vertical farms, with nutritional value optimized by AI and materials science. However, even this picture doesn’t account for all the rapid changes underway in the food industry.


Let’s get you educated on the Sensors Explosion & the Rise of IoT

“Hey Google, how’s my health this morning?”
“One moment,” says your digital assistant.
It takes thirty seconds for the full diagnostic to run, as the system deploys dozens of sensors capturing gigabytes of data.
Smart sensors in toothbrush and toilet, wearables in bedding and clothing, implantables inside your body—a mobile health suite with a 360-degree view of your system.

“Your microbiome looks perfect,” Google tells you. “Also, blood glucose levels are good, vitamin levels fine, but an increased core temperature and IgE levels…”
“Google—in plain English?”
“You’ve got a virus.”
“A what?”
“I ran through your last forty-eight hours of meetings. It seems like you picked it up Monday, at Jonah’s birthday party. I’d like to run additional diagnostics. Would you mind using the….?”
As the Internet of Things catapults to new heights, Google is developing a full range of internal and external sensors, monitoring everything from blood sugar to blood chemistry.

The list of once multi-million dollar medical machines now being dematerialized, demonetized, democratized and delocalized—that is, made into portable and even wearable sensors—could fill a textbook.
Sensor Proliferation
Sensors will not only transform healthcare and diagnostics. Any electronic device that measures a physical, quantitative value—light, acceleration, temperature, etc.—then sends that information to other devices on a network, qualifies as a sensor.
Sensors add intelligence to our appliances. But more importantly, they add hours to our lives.
Consider that in less than a decade, when you run out of coffee, your kitchen cabinet will detect a shortage (cross-referencing sensor data with your coffee-drinking habits) and order more. A blockchain-enabled smart contract will subsequently place an order, triggering an Amazon drone delivery directly to your doorstep.
And of course, your very own Butler-bot might soon transport these freshly ground beans from delivery box to cabinet, sparing you the trouble.
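To make the cabinet scenario concrete, here is a minimal sketch of the reorder logic such a device might run. Everything here is hypothetical: the function names, thresholds, and units are illustrative only, and the blockchain and drone-delivery steps are left out.

```python
# Hypothetical sketch of the smart-pantry logic described above: a cabinet
# sensor reports the grams of coffee remaining, and the system reorders
# when the stock won't outlast a delivery window plus a safety margin.

def days_of_supply(grams_remaining: float, grams_per_day: float) -> float:
    """Estimate how many days the current stock will last."""
    return grams_remaining / grams_per_day

def should_reorder(grams_remaining: float, grams_per_day: float,
                   delivery_days: float = 2.0, safety_days: float = 1.0) -> bool:
    """Reorder when stock would run out before a delivery could arrive."""
    return days_of_supply(grams_remaining, grams_per_day) < delivery_days + safety_days

# Example: 120 g left, household drinks 50 g/day -> 2.4 days of supply,
# below the 3-day threshold, so an order is triggered.
print(should_reorder(120, 50))  # True
```

The interesting part is not the threshold check itself but where the inputs come from: the consumption rate is inferred from sensor history rather than entered by hand.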
If advances in computing power, AI, and networks represent the center mass of the digital revolution, then today’s sensor uprising is the outer edge of that revolt.
Comprising the first part of tomorrow’s smart environment information-processing pipeline, sensors are the data-gathering apparatus that provide our computers with the information they need to act.
Case Study: The Oura Ring
Not much more than a sleek, black band, the Oura Ring is the most accurate sleep tracker on the market, thanks to its suite of miniaturized sensors.


The product began in 2014 at an infectious disease lab in Finland. Health researcher Petteri Lahtela noticed that many of the diseases he’d been studying, including Lyme disease, heart disease and diabetes, shared a curious overlap: all of them negatively affected sleep.
Lahtela started to wonder whether all these diseases cause insomnia, or whether it works the other way around. Could these conditions be alleviated, or at least improved, by fixing sleep?
To solve that puzzle, Lahtela decided he needed data, so he turned to sensors. In 2015, driven by advances in smartphones, we saw the convergence of incredibly small and powerful batteries with incredibly small and powerful sensors.
So small and powerful, in fact, that building a whole new kind of sleep tracker might be possible.
The sensors that caught Lahtela’s fancy were a new breed of heart rate monitors, particularly given that heart rate and variability serve as excellent sleep quality indicators. Yet at the time, all such trackers on the market were riddled with issues.
Fitbit and the Apple Watch, for instance, measure blood flow in the wrist via an optical sensor. Yet the wrist’s arteries sit too far below the surface for perfect measurement, and people don’t often wear watches to bed—as smart watches can interrupt the very sleep they’re designed to measure.
Lahtela’s upgrade? The Oura ring.
Location and sampling rates are its secret weapons. Because the finger’s arteries are closer to the surface than those in the wrist, the Oura gets a far better picture of the action. Plus, while Apple and Garmin measure blood flow twice a second, and Fitbit raises this figure to 12x/second, the Oura ring captures data 250 times per second.
And in studies conducted by independent labs, the ring is 99 percent accurate compared to medical grade heart rate trackers, and 98 percent accurate for heart rate variability.
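Those sampling rates compound quickly over a night of sleep. A quick sketch of what they imply (the device labels are shorthand for the trackers discussed above):

```python
# How many raw heart rate measurements each sampling rate quoted above
# collects over an 8-hour night.

SECONDS_PER_NIGHT = 8 * 3600  # an 8-hour sleep window

sampling_rates_hz = {
    "wrist wearable (2 Hz)": 2,
    "wrist wearable (12 Hz)": 12,
    "Oura ring (250 Hz)": 250,
}

for device, hz in sampling_rates_hz.items():
    samples = hz * SECONDS_PER_NIGHT
    print(f"{device}: {samples:,} samples per night")

# The 250 Hz stream yields 7,200,000 samples a night, over 20x the
# 12 Hz stream, which is what makes fine-grained heart rate
# variability analysis possible.
```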
Twenty years ago, sensors with this level of accuracy would have cost millions, required reasonably sized data centers, and carried tremendous overhead processing costs.
Today, the Oura costs around $300 and sits on your finger—a perfect example of sensors’ exponential growth.
Connected Devices and IoT
We are in the middle of a sensor revolution. The street name for this uprising is the “Internet of Things,” the huge mesh network of interconnected smart devices that will soon span the globe.
And it’s worth tracing the evolution of this revolution to understand how far we’ve come.
In 1989, John Romkey, one of the developers of the Transmission Control Protocol (TCP/IP), connected a Sunbeam toaster to the internet, making it the very first IoT device.
Ten years later, sociologist Neil Gross saw the writing on the wall and made a now famous prediction in the pages of Business Week: “In the next century, planet Earth will don an electric skin. It will use the Internet as a scaffold to support and transmit its sensations […] These will monitor cities and endangered species, the atmosphere, our ships, highways and fleets of trucks, our conversations, our bodies—even our dreams.”
A decade later in 2009, Gross’ prediction bore out: the number of devices connected to the Internet exceeded the number of people on the planet (12.5 billion devices, 6.8 billion people, or 1.84 connected devices per person).
A year later, driven primarily by the evolution of smart phones, sensor prices began to plummet. By 2015, all this progress added up to 15 billion connected devices, with researchers at Stanford predicting 50 billion by 2020.
As most of these devices contain multiple sensors—the average smart phone has about twenty—this also explains why 2020 marks the debut of what’s been called “our trillion sensor world.”
Nor will we stop there. By 2030, those same Stanford researchers estimate 500 billion connected devices. And according to Accenture, this translates into a US$14.2 trillion economy.
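The headline numbers above reduce to simple arithmetic. A quick sketch (all figures are the estimates quoted in the text, not independent data):

```python
# The device counts quoted above, written out as arithmetic.

devices_2009 = 12.5e9   # connected devices in 2009
people_2009 = 6.8e9     # world population in 2009
print(f"Connected devices per person in 2009: {devices_2009 / people_2009:.2f}")  # 1.84

# ~20 sensors per smartphone-class device times 50 billion devices
# is how you arrive at a "trillion sensor world" by 2020.
sensors_per_device = 20
devices_2020 = 50e9
print(f"Estimated sensors by 2020: {sensors_per_device * devices_2020:.0e}")  # 1e+12
```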

Hidden behind these numbers is exactly what Gross had in mind—an electric skin that registers just about every sensation on the planet.
Consider optical sensors. The first digital camera, built in 1975 by Kodak engineer Steven Sasson, was the size of a toaster oven, took twelve black-and-white images, and cost over ten thousand dollars. Today, the average camera that accompanies your smartphone shows a thousand-fold improvement in weight, cost, and resolution.
And these cameras are everywhere: in cars, drones, phones, satellites— with uncanny image resolution to boot. Already, satellites photograph the Earth down to the half-meter range. Drones shrink that to a centimeter. And the LIDAR sensors atop autonomous cars are on track to capture just about everything—gathering 1.3 million data points per second, and registering change down to the single photon level.
Implications
We see this triple trend—of plummeting size and cost, alongside mass increases in performance—everywhere.
The first commercial GPS receiver hit shelves in 1981, weighing 53 pounds and costing $119,900. By 2010, GPS had shrunk to a five-dollar chip small enough to sit on your fingertip.
The “inertial measurement unit” that guided our early rockets was a 50-pound, $20 million device in the mid-60s. Today, the accelerometer and gyroscope in your cellphone do the same job, yet cost about four dollars and weigh less than a grain of rice.
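Written out, the demonetization factors implied by these two examples are striking (the prices are the ones quoted above):

```python
# Rough cost-reduction factors implied by the GPS and IMU figures above.
gps_1981_cost, gps_2010_cost = 119_900, 5          # dollars
imu_1960s_cost, imu_today_cost = 20_000_000, 4     # dollars

print(f"GPS cost reduction: {gps_1981_cost / gps_2010_cost:,.0f}x")   # 23,980x
print(f"IMU cost reduction: {imu_1960s_cost / imu_today_cost:,.0f}x") # 5,000,000x
```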
And these trends are only going to continue. We’re moving from the world of the microscopic, to the world of the nanoscopic.
As a result, we’ve begun to see an oncoming wave of smart clothing, jewelry, glasses—the Oura ring being but one example. Soon, these sensors will migrate to our inner bodies. Alphabet’s Verily branch is working on a miniaturized continuous blood glucose monitor that could assist diabetics in everyday treatment.
Research on smart dust, a dust-mote-sized system that can sense, store, and transmit data, has been progressing for years. Today, a “mote” is the size of an apple seed. Tomorrow, at the nano-scale, motes will float through our bloodstream, exploring one of the last great terrae incognitae—the interior of the human body.
We’re about to learn a whole lot more, and not just about the body. About everything. The data haul from these sensors is beyond comprehension. An autonomous car generates four terabytes a day, or a thousand feature length films’ worth of information. A commercial airliner: Forty terabytes. A smart factory: A petabyte. So what does this data haul get us? Plenty.
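The film comparison checks out under a simple assumption, namely that one HD feature film is roughly 4 GB (that figure is mine, not the source's):

```python
# Sanity check on the "thousand feature films" comparison above.
# Assumption (mine, not the source's): one HD feature film is ~4 GB.
TB_IN_GB = 1000
car_daily_gb = 4 * TB_IN_GB        # autonomous car: 4 TB/day
film_gb = 4                        # assumed HD film file size

print(car_daily_gb / film_gb)      # 1000.0 films' worth of data per day

# The other figures quoted in the text scale accordingly:
airliner_daily_gb = 40 * TB_IN_GB      # commercial airliner: 40 TB
factory_daily_gb = 1000 * TB_IN_GB     # smart factory: ~1 PB
```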
Doctors no longer have to rely on annual check-ups to track patient health, as they now get a blizzard of quantified-self data streaming in 24-7.
Farmers now know the moisture content in both the soil and the sky, allowing pinpoint watering for healthier crops, bigger yields and—a critical factor in the wake of climate change—far less water waste.
In business, agility has been the biggest advantage. In times of rapid change, lithe and nimble trumps slow and lumbering, every time. While knowing every available detail about one’s customers is an admitted privacy concern, it does provide organizations with an incredible level of dexterity, which may be the only way to stay in business in tomorrow’s accelerated times.
Final Thoughts
Within a decade, we will live in a world where just about anything that can be measured will be measured— all the time. It will not be your knowledge that matters, but rather the questions you ask.
It’s a world of radical transparency, where privacy concerns will take on a whole new meaning.
From the edge of space to the bottom of the ocean to the inside of your bloodstream, our world’s emerging electric skin is producing a sensorium of endlessly available information. And riding rapid advances in AI, this “skin” possesses the machine learning required to make sense of that information.
Welcome to the hyper-conscious planet.

How Augmented Reality (AR) will change your industry

Augmented Reality (AR) is already well past its infancy: there are now more than 2,000 AR apps available across over 1.4 billion active iOS devices. Even if on a rudimentary level, the technology is now permeating the consumer products space.
And in just the next four years, the International Data Corporation (IDC) forecasts AR headset production will surge 141 percent each year, reaching a whopping 32 million units by 2023.
AR will soon serve as a surgeon’s assistant, a sales agent, and an educator, personalized to your kids’ learning patterns and interests.
In this fourth installment of our five-part AR series, I’m doing a deep dive into AR’s most exciting industry applications, poised to hit the market in the next 5-10 years.
Let’s dive in.
Healthcare
(1) Surgeons and physicians:
Whether through detailed and dynamic anatomical annotations or visualized patient-specific guidance, AR will soon augment every human medical practitioner.
To start, AR is already being used as a diagnosis tool. SyncThink, which recently partnered with Magic Leap, has developed eye-tracking technology to diagnose concussions and balance disorders. Yet another startup, XRHealth, launched its ARHealth platform on Magic Leap to aid in rehabilitation, pain distraction, and psychological assessment.

Moreover, surgeons at the Imperial College London have used Microsoft’s HoloLens 1 in pre-operative reconstructive and plastic surgery procedures, which typically involve using CT scans to map the blood vessels that supply vital nutrients during surgery.
As explained by the project’s senior researcher, Dr. Philip Pratt, “With the HoloLens, we’re now doing the same kind of [scan] and then processing the data captured to make it suitable to look at. That means we end up with a silhouette of a limb, the location of the injury, and the course of the vessels through the area, as opposed to this grayscale image of a scan and a bit more guesswork.”
Dramatically lowering associated risks, AR can even help surgeons visualize the depth of vessels and choose the optimal incision location.
And while the HoloLens 1 was only used in pre-op visualizations, Microsoft’s HoloLens 2 is on track to reach the operating table. Take Philips’ Azurion image-guided therapy platform, for instance. Built specifically for the HoloLens 2, Azurion strives to provide surgeons with real-time patient data and dynamic 3D imagery as they operate.
Moreover, AR headsets and the virtual overlays they provide will exponentially improve sharing of expertise across hospitals and medical practices. Niche medical specialists will be able to direct surgeons remotely from across the country (not to mention the other side of the planet), or even view annotated AR scans to offer their advice.
Magic Leap, in its own right, is now collaborating with German medical company Brainlab to create a 3D spatial viewer that would allow clinicians to work together in surgical procedures across disciplines.

But beyond democratizing medical expertise, AR will even provide instantaneous patient histories, gearing doctors with AI-processed information for more accurate diagnoses in a fraction of the time.
By saving physicians’ time, AR will free doctors to spend a greater percentage of their day in face-to-face contact with their patients, establishing trust and compassion, and creating opportunities to educate healthcare consumers (rather than merely treating them).
And when it comes to digital records, doctors can simply use voice control to transcribe entire interactions and patient visits, multiplying what can be done in a day, and vastly improving the patient experience.
(2) Assistance for those with disabilities:
Today, over 3.4 million visually impaired individuals reside in the U.S. alone. But thanks to new developments in AI-integrated smart glasses, many of the constraints they face could soon ease considerably.
And new pioneers continue to enter the market, including NavCog, Horus, AIServe, and MyEye, among others. Microsoft has even begun development of a “Seeing AI” app, which translates the world into audio descriptions for the blind, as seen through a smartphone’s camera lens.

During the Reality Virtual Hackathon in January, hosted by Magic Leap at MIT, two of the top three winning projects addressed disabilities. CleARsite provided environment reconstruction, haptic feedback, and Soundfield Audio overlay to enhance a visually impaired individual’s interaction with the world. Meanwhile, HeAR used a Magic Leap 1 headset to translate speech or sign language into readable text in speech bubbles in the user’s field of view. Magic Leap remains dedicated to numerous such applications, each slated to vastly improve quality of life.
(3) Biometric displays:
In biometrics, cyclist sunglasses and swimmer goggles have evolved into the perfect medium for AR health metric displays. Smart glasses like the Solos ($499) and Everysight Raptors ($599) provide cyclists with data on speed, power, and heart rate, along with navigation instructions. Meanwhile, Form goggles ($199)—just released at the end of August—show swimmers their pace, calories burned, distance, and stroke count in real-time, up to 32 feet underwater.

Accessible health data will shift off our wrists and into our fields of view, offering us personalized health recommendations and pushing our training limits alike.
Retail & Advertising
(1) Virtual shopping:
The year is 2030. Walk into any (now AI-driven, sensor-laden, and IoT-retrofitted) store, and every mannequin will be wearing a digital design customized to your preferences. Forget digging through racks of garments or hunting down your size. Cross-referencing your purchase history, gaze patterns, and current closet inventory, AIs will display tailor-made items most suitable for your wardrobe, adjusted to your individual measurements.

An app available on most Android smartphones, Google Lens is already leaping into this marketplace, allowing users to scan QR codes and objects through their smartphone cameras. Google Lens’s Style Match feature even lets consumers identify pieces of clothing or furniture and view similar designs available online and through e-commerce platforms.
(2) Advertising:
And these mobile AR features are quickly encroaching upon ads as well.
In July, the New York Times debuted an AR ad for Netflix’s “Stranger Things,” for instance, guiding smartphone users to scan the page with their Google Lens app and experience the show’s fictional Starcourt Mall come to life.

But immersive AR advertisements of the future won’t all be unsolicited and obtrusive. Many will likely prove helpful.
As you walk down a grocery store aisle, discounts and special deals on your favorite items might populate your AR smart glasses. Or if you find yourself admiring an expensive pair of pants, your headset might suggest similar items at a lower cost, or cheaper distributors with the same product. Passing a stadium on the way to work, next weekend’s best concert ticket deals might filter through your AR suggestions—whether your personal AI intends them for your friend’s upcoming birthday or your own enjoyment.
Instead of bombarding you at every turn on a handheld device you have to check, ads will appear only when most relevant to your physical surroundings. Or toggle them off entirely, and have your personal AI do the product research for you.
Education & Travel
(1) Customized, continuous learning:
The convergence of today’s AI revolution with AR advancements gives us the ability to create individually customized learning environments.
Throw sensors in the mix for tracking of neural and physiological data, and students will soon be empowered to better mediate a growth mindset, and even work towards achieving a flow state (which research shows can vastly amplify learning).

Within the classroom, Magic Leap One’s Lumin operating system allows multiple wearers to share in a digital experience, such as a dissection or historical map. And from a collaborative creation standpoint, students can use Magic Leap’s CAD application to join forces on 3D designs.
In success, AR’s convergence with biometric sensors and AI will give rise to an extraordinarily different education system: one composed of delocalized, individually customizable, responsive, and accelerated learning environments.
Continuous and learn-everywhere education will no longer be confined to the classroom. Already, numerous AR mobile apps can identify objects in a user’s visual field, instantaneously presenting relevant information. As user interface hardware undergoes a dramatic shift in the next decade, these software capabilities will only explode in development and use.
Gazing out your window at a cloud will unlock interactive information about the water cycle and climate science. Walking past an old building, you might effortlessly learn about its history dating back to the sixteenth century. I often discuss information abundance, but it is data’s accessibility that will soon drive knowledge abundance.
(2) Training:
AR will enable on-the-job training at far lower costs in almost any environment, from factories to hospitals.
Smart glasses are already beginning to guide manufacturing plant employees as they learn how to assemble new equipment. Retailers stand to decimate the time it takes to train a new employee with AR tours and product descriptions.
And already, automotive technicians can better understand the internal components of a vehicle without dismantling it. Jaguar Land Rover, for instance, has recently implemented Bosch’s Re’flekt One AR solution. Giving technicians “x-ray” vision, the AR service allows them to visualize the insides of Range Rover Sport vehicles without removing the dashboard.
In healthcare, medical students will be able to practice surgeries on artificial cadavers with hyper-realistic AR displays. Not only will this allow them to rapidly iterate on their surgical skills, but AR will dramatically lower the cost and constraints of standard medical degrees and specializations.
Meanwhile, sports training in simulators will vastly improve with advanced AR headset technology. Even practicing chess or piano will be achievable with any tabletop surface, allowing us to hone real skills with virtual interfaces.
(3) Travel:
As with most tasks, AI’s convergence with AR glasses will allow us to outsource all the most difficult (and least enjoyable) decisions associated with travel, whether finding the best restaurants or well-suited local experiences.
But perhaps one of AR’s more sophisticated uses (already rolling out today) involves translation. Whether you need to decode a menu or access subtitles while conversing across a language barrier, instantaneous translation is about to improve exponentially with the rise of AI-powered AR glasses. Even today, Google Translate can already convert menu text and street signs in real time through your smartphone.
Manufacturing
As I explored last week, manufacturing presents the nearest-term frontier for AR’s commercial use. As a result, many of today’s leading headset companies—including Magic Leap, Vuzix, and Microsoft—are seeking out initial adopters and enterprise applications in the manufacturing realm.

(1) Design:
Targeting the technology for simulation purposes, Airbus launched an AR model of the MRH-90 Taipan aircraft just last year, allowing designers and engineers to view various components, potential upgrades, and electro-optical sensors before execution. Saving big on parts and overhead costs, Airbus thereby gave technicians the opportunity to make important design changes while still interacting directly with the aircraft.
(2) Supply chain optimization:
AR guidance linked to a centralized AI will also mitigate supply chain inefficiencies. Coordinating moving parts, eliminating the need to hold a scanner at each checkpoint, and directing traffic within warehouses will vastly improve workflow.
After initially implementing AR “vision picking” in 2015, leading logistics company DHL recently announced it would continue to use the newest Google smart glasses in warehouses across the world. Or take automotive supplier ZF, which has now rolled out use of the HoloLens in plant maintenance.

(3) Quality assurance & accessible expertise:
AR technology will also play a critical role in quality assurance, as it already does in Porsche’s assembly plant in Leipzig, Germany. Whenever manufacturers require guidance from engineers, remote assistance is effectively no longer remote, as equipment experts guide employees through their AR glasses and teach them on the job.
Transportation & Navigation
(1) Autonomous vehicles:
To start, Nvidia’s Drive platform for Level 2+ autonomous vehicles is already combining sensor fusion and perception with AR dashboard displays to alert drivers of road hazards, highlight points of interest, and provide navigation assistance.

And in our current transition phase of partially autonomous vehicles, such AR integration allows drivers to monitor conditions yet eases the burden of constant attention to the road. Along these lines, Volkswagen has already partnered with Nvidia to produce I.D. Buzz electric cars, set to run on the Drive OS by 2020. And Nvidia’s platform is fast on the move, having additionally partnered with Toyota, Uber, and Mercedes-Benz. Within just the next few years, AR displays may be commonplace in these vehicles.
(2) Navigation:

We’ve all seen (or been) that someone spinning around with their smartphone to decipher the first few steps of a digital map’s commands. But AR is already making everyday navigation intuitive and efficient.
Google Maps’ AR feature has already been demoed on Pixel phones: instead of staring at your map from a bird’s eye view, users direct their camera at the street, and superimposed directions are immediately layered virtually on top.
Not only that, but as AI identifies what you see, it instantaneously communicates with your GPS to pinpoint your location and orientation. Although a mainstream rollout date has not yet been announced, this feature will likely make it to your phone in the very near future.
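Conceptually, combining a camera-derived position with a GPS fix can be sketched as an inverse-variance fusion of two noisy estimates. The Python snippet below is an illustrative toy, not Google's actual localization pipeline; the function name and accuracy figures are assumptions.

```python
# Conceptual sketch: fuse a coarse GPS fix with a camera-based
# (visual positioning) estimate using inverse-variance weighting.
# All numbers are illustrative, not from any real system.

def fuse_estimates(gps_pos, gps_sigma, vps_pos, vps_sigma):
    """Combine two 2D position estimates (x, y in meters).

    Each estimate is weighted by the inverse of its variance, so
    the more confident source dominates the fused result.
    """
    w_gps = 1.0 / gps_sigma**2
    w_vps = 1.0 / vps_sigma**2
    total = w_gps + w_vps
    return tuple(
        (w_gps * g + w_vps * v) / total
        for g, v in zip(gps_pos, vps_pos)
    )

# GPS is good to roughly 10 m in a city canyon; visual positioning
# against known street imagery can reach roughly 1 m (assumed values).
fused = fuse_estimates((105.0, 40.0), 10.0, (100.0, 42.0), 1.0)
print(fused)  # dominated by the more precise visual estimate
```

The same weighting idea extends to orientation, which is why camera-based localization can also resolve which way you are facing, something GPS alone cannot do.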
Entertainment
(1) Gaming:
We got our first taste of AR’s real-world gamification in 2016, when Niantic released Pokémon Go. Today, the game has surpassed 1 billion downloads. But in contrast to VR, AR is increasingly seen as a medium for bringing gamers together in the physical world, encouraging outdoor exploration, activity, and human connection in the process.
And in the rapidly growing eSports industry, AR has the potential to turn players’ screens into live-action stadiums. Just this year, the global eSports market is projected to exceed US$1.1 billion in revenue, and AR’s potential to elevate the experience will only see this number soar.
(2) Art:
Many of today’s most popular AR apps allow users to throw dinosaurs into their surroundings (Monster Park), learn how to dance (Dance Reality), or try on highly convincing virtual tattoos (InkHunter).
And as high-definition rendering becomes more commonplace, art, too, will grow more and more accessible.
Magic Leap aims to construct an entire “Magicverse” of digital layers superimposed on our physical reality. Location-based AR displays, ranging from art installations to gaming hubs, will be viewable in a shared experience across hundreds of headsets. Individuals will simply toggle between modes to access whichever version of the universe they desire. Endless opportunities to design our surroundings will arise.
Apple, in its own right, recently announced the company’s [AR]T initiative, which consists of floating digital installations. Viewable through [AR]T Viewer apps in Apple stores, these installations can also be found in [AR]T City Walks guiding users through popular cities, and [AR]T Labs, which teach participants how to use Swift Playgrounds (an iPad app) to create AR experiences.
(3) Shows:
And at the recent Siggraph Conference in Los Angeles, Magic Leap introduced an AR-theater hybrid called Mary and the Monster, wherein viewers watched a barren “diorama-like stage” come to life in AR.

Source: Venture Beat.
While audience members shared a common experience, as with a traditional play, individuals could also zoom in on specific actors to observe their expressions more closely.
Say goodbye to opera glasses and hello to AR headsets.
Final Thoughts
While AR headset manufacturers and mixed reality developers race to build enterprise solutions from manufacturing to transportation, AR’s use in consumer products is following close behind.
Magic Leap leads the way in developing consumer experiences we’ve long been waiting for, as the “Magicverse” of localized AR displays in shared physical spaces will reinvent our modes of connection.
And as AR-supportive hardware is now built into today’s newest smartphones, businesses have an invaluable opportunity to gamify products and immerse millions of consumers in service-related AR experiences.
Even beyond the most obvious first-order AR business cases, new industries to support the augmented world of 2030 will soon surge in market competition, whether headset hardware, data storage solutions, sensors, or holograph and projection technologies.
Jump on the bandwagon now— the future is faster than you think!

Board of Directors | Board of Advisors | Strategic Leadership
The Future is Faster Than You Think

3D printing is about to transform manufacturing as we know it, decimating waste, multiplying speed to market, and harnessing never-before-used materials.
Already forecast to hit US$15.8 billion in value by 2020, additive manufacturing products and services are projected to more than double to $35.6 billion by 2024. Just five years from today.
But not only will 3D printing turn supply chains on their head here on Earth—shifting how and who manufactures our products—but it will be the vital catalyst for making space colonies (and their infrastructure) possible.
Welcome to the 2030 era of tailor-made, rapid-fire, ultra-cheap, and zero-waste product creation… on our planet, and far beyond.
(Note: If you like this blog, share it! | LinkedIn | Facebook | Twitter | Or send your friends and family to this link to subscribe!)
3D printing on the ISS
Today, the most expensive supply chain in the known universe extends only 241 miles.
Jutting straight up from mission control down here on Earth, this resupply network extends directly to the astronauts aboard the International Space Station (or the ISS).
Yet the supply chain’s hefty expense is due almost entirely to weight. Why? It costs $10,000 per pound just to get an object out of the Earth’s gravity well. And because it takes months for that object to actually reach the Space Station, a significant portion of the ISS’s precious real estate is taken up by storage of replacement parts.
In other words, the most expensive supply chain in history leads to the most exotic junkyard in the cosmos.
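The arithmetic behind that expense is easy to sketch. In the snippet below, only the $10,000-per-pound launch cost comes from the figure cited above; the part and feedstock weights are purely illustrative assumptions.

```python
LAUNCH_COST_PER_LB = 10_000  # $/lb to orbit, as cited above

# Illustrative assumptions (not real mission data):
spare_parts_lb = 200   # mass of spares stored aboard "just in case"
feedstock_lb = 30      # generic printer feedstock covering the same needs

ship_spares = spare_parts_lb * LAUNCH_COST_PER_LB
ship_feedstock = feedstock_lb * LAUNCH_COST_PER_LB
print(f"Shipping spares:    ${ship_spares:,}")     # $2,000,000
print(f"Shipping feedstock: ${ship_feedstock:,}")  # $300,000
```

Because one generic feedstock can become any of hundreds of parts on demand, the launch mass, and with it the cost, collapses.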
Made in Space, the first-ever company seeking to solve these problems, set itself the ambitious goal of building a 3D printer that works in zero gravity. Just a few years later, Made in Space is now in space. So when an astronaut on a 2018 ISS mission broke his finger, the team no longer needed to order a splint from Earth and wait months for its arrival.
Instead, they flipped on their 3D printer, loaded in some feed stock, found “splint” in their blueprint archive, and created what they needed, when they needed it.
Successes like that of Made in Space represent a level of on-demand manufacturing capability unlike anything we’ve seen before.
But how did we get here…?
The original 3D printers showed up back in the ’80s. They were clunky, slow, hard to program, easy to break, and worked with only one material: plastic.
Today, these machines have colonized most of the periodic table. We can now print in over 500 different materials, in full color, in metals, rubber, plastic, glass, concrete, and even in organic materials, such as cells, leather and chocolate.
The interfaces are nearly plug-and-play simple—meaning if you can learn to use Facebook, you can probably learn to 3D print.
And what we can now print is astounding. From jet engines to apartment complexes to circuit boards to prosthetic limbs, 3D printers can fabricate enormously complex devices in ever-shorter timeframes.
Moreover, because objects are being built one layer at a time, customization requires nothing more than altering a digital file. Design complexity, what was once one of the most expensive components of the manufacturing process, now comes for free.
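As a concrete illustration of that point, here is a minimal Python sketch of what “altering a digital file” can mean: a function that emits a parametric part as ASCII STL, the plain-text mesh format most slicers accept. The tetrahedron geometry and dimensions are illustrative only; change one argument and you have a customized part, with no retooling.

```python
def tetrahedron_stl(size):
    """Return ASCII STL text for a tetrahedron scaled by `size` (mm)."""
    s = size
    verts = [(0, 0, 0), (s, 0, 0), (0, s, 0), (0, 0, s)]
    faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
    lines = ["solid part"]
    for a, b, c in faces:
        lines.append("  facet normal 0 0 0")  # slicers recompute normals
        lines.append("    outer loop")
        for i in (a, b, c):
            x, y, z = verts[i]
            lines.append(f"      vertex {x} {y} {z}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append("endsolid part")
    return "\n".join(lines)

# Customization = changing one number, then re-slicing the new file.
small = tetrahedron_stl(10)   # 10 mm part
large = tetrahedron_stl(25)   # same design, 2.5x the size
```

Real parts involve far richer geometry, but the principle is the same: the “tooling” is a text file, so every unit off the printer can be different.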
And in a big win for our planet, 3D printing also cleans up the process.
In comparison, traditional manufacturing is about turning more into less. Start with a big hunk of whatever, and carve, shave, and shred your way down to the desired object. Most of what you’re producing along the way is waste.
But 3D printing turns this process on its head. By building up objects one layer at a time, the process uses 10 percent of the raw materials of traditional manufacturing.
Nor is it just waste that vanishes.
The on-demand nature of 3D printers removes the need for inventory and everything that inventory requires. Other than the space required for printing materials and the printer itself, 3D printing all but erases supply chains, transportation networks, stock rooms, warehouses and all the rest.
This one development—this single exponential technology—threatens to demonetize, dematerialize and democratize the entire $12 trillion manufacturing industry.
And once again, this development was a long time coming.
Until the early 2000s, 3D printers were exceptionally pricey toys. This started to shift in 2007, when what was once a several-hundred-thousand-dollar machine became available for under $10,000.
Just one year later, the first 3D-printed objects hit the market. Housewares, jewelry, clothing, even prosthetic limbs.
Transportation was next: 2011 saw the world’s first 3D-printed car. Jet engines soon followed, and rocket engines were not far behind.
But 2017 was the year that additive manufacturing entered its disruptive phase. By then, printing speeds had increased 150-fold, the variety of materials had increased 500-fold, and printers themselves could now be purchased for under $1,000.
3D Printing Convergences
As price dropped and performance increased, convergences began to arise—and this is what moves 3D printing from a manufacturing revolution to a society-wide force for change.
Take computing, for instance. A couple years back, the Israeli company Nano Dimension brought the first commercial circuit board printer to market, a development that lets designers prototype new circuit boards in hours instead of months. Since the design of circuit boards is a brake on the speed of computer development—that is, a brake on the biggest driver of technological acceleration—this convergence doesn’t just represent a revolution in computer manufacturing; it puts the pedal to the metal on an already accelerated process.
Another convergence sits at the intersection of energy and 3D printing, wherein additive manufacturing is already making batteries, wind turbines and solar cells— three of the most expensive and important components of the renewables revolution.
And even transportation is seeing similar impacts. Engines used to be among the most complicated machines on the planet. GE’s advanced turboprop, for instance, once contained 855 individually milled components. Today, with 3D printing, it has twelve. The upside? A hundred pounds of weight reduction and a 20 percent improvement in fuel burn.
Yet another convergence involves 3D printing and biotech. The first few 3D-printed prosthetics arrived in 2010. And today, hospitals are rolling them out at scale. Just last year, for instance, a Jordanian hospital introduced a program that can fit and build a prosthetic for an amputee in only 24 hours. The price tag? Less than US$20. Meanwhile, as 3D printers can now print electronics, we’re seeing innovations like the Hero Arm: the world’s first 3D-printed, multi-grip bionic prosthetic available at non-bionic prices.
And replacement body parts are about to become replacement organs.
Back in 2002, scientists at Wake Forest University 3D-printed the first kidney capable of filtering blood and producing urine. In 2010, Organovo, a San Diego-based bioprinting outfit, created the first blood vessel. And today, San Francisco-based 3D tissue printing company Prellis Biologics is achieving record speeds in its pursuit of printed human tissue with viable capillaries. In success, these additive manufacturing breakthroughs could forever end our shortage of donor organs.
And in the realm of real estate and infrastructure, the construction industry will be downright unrecognizable within just a few years.
But a story that might best illustrate the world-changing power of 3D printing belongs to a guy named Brett Hagler.
Sickened by the tent cities he saw in Haiti after the earthquake, Hagler decided to find a way to use emerging technology to provide permanent shelter for the people who need it most. Forming a non-profit called New Story, he raised research capital from a group of investors known only as “the Builders” and created a solar-powered 3D printer that can work in the worst environments imaginable. Greatly democratizing the field, this printer erects a 400-800 square-foot home in 48 hours at a cost of about $4,000. But these homes aren’t bunkers: they feature nifty modern designs complete with wrap-around porches.
And in the fall of this year (2019), New Story is starting construction of the world’s first 3D-printed community: 100 homes to be given or sold (via no-interest, micro-repayment loans available to anyone) to people who are currently homeless.
Final Thoughts
3D printing is not a mere paradigm shift in manufacturing.
It is fundamentally democratizing access to vital resources, redefining nodes of power in contemporary supply chains, and turning wasteful production processes into closed-loop economies.
Whether a bearer of infinite organ supply or trillions of sensors, 3D printing and the production materials it unlocks will permeate every industry imaginable.
And even in some of the most barren of environments (think lonesome planets, disaster zones, or scattered among asteroids in space), additive manufacturing is one of tomorrow’s greatest conduits for converting scarcity to abundance.
Want a copy of Peter Diamandis and Steven Kotler’s next book? If you enjoyed Bold or Abundance, their new book should be just as informative. The Future Is Faster Than You Think is available for pre-order on Amazon and is due out on January 28, 2020.


Emerging Augmented Reality (AR) technologies are driving increased demand for innovations, 5G will provide the key to unlocking AR’s potential

Today, adults in the U.S. spend over nine hours a day looking at screens. That’s more than a third of our day.
Yet even though they serve as a portal to 90 percent of our media consumption, screens continue to define and constrain how and where we consume content, and they may very soon become obsolete.
Riding new advancements in hardware and connectivity, augmented reality (AR) is set to replace these 2D interfaces, instead allowing us to see through a digital information layer. And ultimately, AR headsets will immerse us in dynamic stories, learn-everywhere education, and even gamified work tasks.
If you want to play AR Star Wars, you’re battling the Empire on your way to work, in your cubicle, cafeteria, bathroom and beyond.
We got our first taste of AR’s real-world gamification in 2016, when Niantic released Pokémon Go. Thus began the greatest cartoon character turkey shoot in history. With 5 million daily users, 65 million monthly users, and over $2 billion in revenue, the virtual-overlaid experience remains one for the books.
In the years since, similar AR apps have exploded. Once thick and bulky, AR glasses are becoming increasingly lightweight, stylish, and unobtrusive. And over the next 15 years, AR portals will become almost unnoticeable, as hardware rapidly dematerializes.
Companies like Mojo Vision are even rumored to be developing AR contact lenses, slated to offer us heads-up display capabilities — no glasses required.
In this second installment of our five-part AR blog series, we’re doing a deep dive into the various apps, headsets, and lenses on the market today, along with their projected growth.
Let’s take a look…
Mobile AR
We have already begun to sample AR’s extraordinary functions through mobile (smartphone) apps. And the growth of the market is only accelerating.
Snap recently announced it will raise $1 billion in short-term debt to invest in media content, acquisitions, and AR features. Both Apple and Google are racing to deploy phones with requisite infrastructure to support hyper-realistic AR.
And in the iOS space, developers use Apple’s ARKit framework on iPhones, from the SE to the latest-generation X, to bring high-definition AR experiences to life. Apple CEO Tim Cook has repeatedly emphasized his belief that AR will “change the way we use technology forever.”
While recent rumors suggest the company’s AR glasses project has been discontinued, Apple’s foray into AR is far from over. Just recently, the tech giant posted a large collection of job listings for AR and VR experts. And although somewhat speculative, Apple is likely waiting for the consumer market to mature before releasing its first-generation AR glasses or pivoting towards an entirely new AR hardware product.
For now, Apple seems to be promoting the extensive hardware advancements showcased by its A12 bionic chip, not to mention the variety of apps available in its App Store.
- In the productivity realm: IKEA Place allows users to try out furniture in the home, experimenting with styles and sizing before ordering online. Or take Vuforia Chalk, a novel AR tool that helps customers fix appliances with real-time virtual assistance. As users direct their smartphone cameras towards troublesome appliances, remote tech support workers can draw on consumers’ screens to guide them through repair steps.
- As to the AR playground, Monster Park brings Jurassic Park dinosaurs into any landscape you desire, immersing you in a modern-day Mesozoic Era. Meanwhile, Dance Reality can guide you through detailed steps and timing of countless dance styles.
- In virtually immersive learning, BBC’s Civilisations lets you hold, spin, and view x-rays of ancient artifacts while listening to historical narrations. WWF’s Free Rivers transforms your tabletop into natural landscapes, from the Himalayas to the African Sahara, allowing you to digitally manipulate entire ecosystems to better understand how water flow affects habitats.
- Or even create your own DIY AR worlds and objects using Thyng.
For Android users, options are just as varied, built on ARCore, Google’s AR developer platform for Android. While the recently announced Google Glass Enterprise Edition 2 aims to capture enterprise clients, Android smartphone hardware provides remarkable AR experiences for everyday consumers.
- For sheer doodling, DoodleLens (Android app) brings your doodles to life, transforming paper drawings into 3D animated figures that you can place and manipulate in your physical environment. Even more directly, Just a Line (Android app) allows anyone to create a 3D drawing within their physical surroundings, making space itself an endless canvas.
- Learn as you travel: Google Translate (Android app) can now take an image of any foreign street sign, menu, or label and provide instantaneous translation. And beyond Earth-bound adventures, the now open-sourced Sky Map (Android app) guides you through constellations across the night sky.
- Even alter your own body with InkHunter (Android app), which allows users to preview any potential tattoo design on their skin. Or, as is familiar to most younger folks, change your look with Snapchat’s (Android app) computer vision-derived filters, which have already reached 90 percent of 12- to 24-year-olds in the U.S.
Leading Headsets
Although the number of AR headsets breaking into the market may seem overwhelming, a few of the top contenders are now pushing the envelope in everything from wide FOV immersion to applications in enterprise.
(1) Highest Resolution
DreamGlass: Connected to a PC or Android-based smartphone, DreamWorld’s headset offers 2.5K resolution in each lens, beating out Full HD screens, but in AR. Backed by a flood of investment, resolution improvements shrink pixel size, reducing the “screen door effect,” whereby visible pixel boundaries disrupt the image like a screen’s mesh. The headset also offers unprecedented hand- and head-tracking precision with six degrees of freedom (three axes of rotation plus three of translation).
And with a flexible software development kit (SDK), supported by Unity and Android, the device is highly accessible to developers, making it a ready candidate for countless immersive experiences. Already at $619, the DreamGlass and comparable technology are only falling in price.
(2) Best for Enterprise
Google Glass Enterprise Edition 2: In the four years since Google released the last iteration, Google Glass has received a major upgrade, now equipped with an 8-megapixel camera, detachable lens, vastly increased battery life, faster connectivity, and the high-performance Snapdragon XR1 processor. Already, the Glass has been sold to over 100 businesses, including GE, agricultural machinery manufacturer AGCO, and health record company Dignity Health.
But perhaps most remarkable are the returns AR can generate for business. Using the Glass, GE has increased productivity by 25 percent, and DHL improved its supply chain efficiency by 15 percent. While currently available only to businesses, the new-and-improved AR glasses stand at $999 and will continue to ride plummeting production costs.
(3) Democratized AR
Vuzix Blade: Resembling chunky Oakley sunglasses, these smart glasses are extraordinarily portable, with a built-in Android OS and both WiFi and Bluetooth connectivity. Designed for everyday consumer use (at a price point of $700), the Vuzix Blade is slowly chipping away at smartphone functionality. For easy control of an intuitive interface, a touchpad on the device’s temple allows consumers to display everything from social media platforms and messages to “light AR” experiences. Meanwhile, an 8MP HD camera renders your phone camera redundant, allowing users to remain immersed in an experience while digitally capturing it. All the while, built-in Alexa capabilities and vibration alerts extend the experience beyond pure visual stimulation.
(4) Widest Field of View (FOV)
Microsoft HoloLens 2: This newly announced headset leads the industry with a 43° x 29° FOV, more than double its (2016-released) predecessor’s capability. But this drastic increase in visual immersiveness is far from the only device improvement. For improved long-use comfort, the headset’s center of gravity now rests on the top of the head, moving away from typical front-loaded headsets.
In an even more novel feature, tiny cameras on the nose bridge verify a user’s identity by scanning the wearer’s eyes and customize the display based on the distance between pupils. Once paired with emotion-deducing AIs (now under development), this tracking technology could even evolve to intuitively predict a user’s desires and emotional feedback in future models. Geared with a Qualcomm 850 mobile processor and Microsoft’s own built-in AI engine, the HoloLens 2’s potential is limitless.
(5) Class A Comfort
Magic Leap One: Weighing less than 0.8 pounds, this headset provides one of the most lightweight experiences available today with a 40° x 30° FOV, just barely eclipsed by that of Microsoft’s HoloLens 2. En route to dematerialization, Magic Leap merely requires a small “Lightpack” attachment in the wearer’s pocket, connected via cable to the goggles. A handheld controller additionally contains a touchpad, haptic feedback, and six degrees of freedom motion sensing. Meanwhile, light sensors make the digital renderings even more realistic, as they reflect physical light into the viewer’s space.
Teasing AR’s future convergence with AI, Magic Leap even features a virtual human called “Mica,” which responds to a user’s emotions (detected through eye-tracking) by returning a smile or offering a friendly gesture.
Final Thoughts
As headsets plummet in price and size, AR will rapidly permeate households over the next decade.
Once we have mastered headsets and smart glasses, AR-enabled contact lenses will make our virtually enhanced world second nature.
And ultimately, BCIs will directly interface with our neural signals to provide an instantaneous, seamlessly intuitive connection, merging our minds with limitless troves of knowledge, rich human connection, and never-before-possible experiences.
While only approaching the knee of the curve, the pioneering mobile apps and headset technologies explored above will soon give rise to one of the most revolutionary industries yet seen — one that will fundamentally transform our lives.
Just remember: over 120 million workers worldwide (11.5 million in the U.S.) will need to be retrained in the next three years due to artificial intelligence, according to an IBM survey. “Upskilling” these workers will be a major challenge, as workers today require more training than ever to learn new skills — 36 days versus three days in 2014, per IBM. And the skills employers value most (“soft skills” like communication and ethics) often take the longest to develop.

Board of Directors | Board of Advisors | Strategic Leadership
Please keep me in mind as your Executive Coach, openings for Senior Executive Engagements, and Board of Director openings. If you hear of anything within your network that you think might be a positive fit, I’d so appreciate if you could send a heads up my way. Email me: [email protected] or Schedule a call: Cliff Locks

#BoardofDirectors #BoD #artificialintelligence #AI #innovation virtualreality #vr #d #augmentedreality #HR #executive #business #CXO #CEO #CFO #CIO #executive #success #work #follow #leadership #corporate #office #Biotech Cleantech #entrepreneur #coaching #businessman #professional #excellence #development #motivation Contributors: Peter Diamandis and Clifford Locks #InvestmentCapitalGrowth
Augmented Reality is about to add a digital intelligence layer

Augmented Reality is about to add a digital intelligence layer to our every surrounding, transforming retail, manufacturing, education, tourism, real estate, and almost every major industry that holds up our economy today.
The global VR/AR market is expected to reach a value of $814.7 billion by 2025, surging at a 63 percent CAGR.
Apple’s Tim Cook has remarked, “I regard [AR] as a big idea like the smartphone […] The smartphone is for everyone. We don’t have to think the iPhone is about a certain demographic, or country, or vertical market. It’s for everyone. I think AR is that big, it’s huge.”
And as Apple, Microsoft, Alphabet, and numerous other players begin entering the AR market, we are on the cusp of witnessing a newly augmented world.
In one of the greatest technological revolutions of this century, smartphones dematerialized cameras, stereos, video game consoles, TVs, GPS systems, calculators, paper, and even matchmaking as we knew it.
AR glasses will soon perpetuate this, ultimately dematerializing the smartphone itself. We will no longer gaze into tiny, two-dimensional screens but rather see through a fully immersive, 3D interface.
While already beginning to permeate mobile applications, AR will soon migrate to headsets, and eventually reach us through contact lenses — replacing over 3 billion smartphones in use today.
I am immensely excited about this five-part AR blog series. In it, we will cover:
- Importance of AR as an emerging technology
- Leading AR hardware
- AR convergence with AI, blockchain, and sensors
- Industry-specific applications
- Broader implications of the AR Cloud
Let’s dive in!
Introducing the Augmented World
AR superimposes digital worlds onto physical environments (by contrast to VR, which completely immerses users in digital realities). In this way, AR allows users to remain engaged with their physical surroundings, serving as a visual enhancement rather than replacement.
As AR hardware costs continue to plummet — and advancements in connectivity begin enabling low-latency, high-resolution rendering — today’s AR producers are initially targeting businesses through countless enterprise applications.
And while AR headsets remain too pricey for widespread consumer adoption, distribution is fast increasing. Roughly 150,000 headsets were shipped in 2016, and this number is expected to reach 22.8 million by 2022.
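For a sense of just how steep that trajectory is, the implied compound annual growth rate can be computed directly from the two shipment figures above (an illustrative calculation, not an industry forecast):

```python
# Implied CAGR from the shipment figures cited above:
# ~150,000 AR headsets shipped in 2016, projected 22.8 million by 2022.
shipped_2016 = 150_000
shipped_2022 = 22_800_000
years = 2022 - 2016

cagr = (shipped_2022 / shipped_2016) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.0%}")  # roughly 131% per year
```

In other words, shipments would have to more than double every year for six straight years to hit that projection.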
Meanwhile, AR app development has skyrocketed, allowing smartphone users to sample rudimentary levels of the technology through numerous mobile applications. Already, over 1 billion people across the globe use mobile AR, and a majority of mobile AR integrations involve social media (84%) and e-commerce (41%).
Yet while well-known players like Microsoft, Apple, Alphabet, Qualcomm, Samsung, NVIDIA, and Intel have made tremendous strides, well-funded startups remain competitive.
Magic Leap, a company aiming to eliminate the screen altogether, has raised a total of $2.6 billion since its founding in 2010. With its own head-mounted virtual retinal display, Magic Leap projects a digital light field into users’ eyes to superimpose 3D computer-generated imagery over set environments, whether social avatars, news broadcasts or interactive games.
Mojo Vision, in its own right, has raised $108 million in its efforts to develop and produce an AR contact lens. Or take Samsung’s recently granted U.S. patent to develop smart lenses capable of streaming text, capturing videos, and even beaming images directly into a wearer’s eyes. Given their multi-layered lens architecture, the contacts are even designed to include a motion sensor (for eye movement tracking), hidden camera, and display unit.
And as of this writing, nearly 1,800 different AR startups populate the startup platform AngelList.
While AR isn’t (yet) as democratized as VR, $100 will get you an entry-level Leap Motion headset, while a top-of-the-line Microsoft HoloLens 2 remains priced at $3,500. However, heads-up displays in luxury automobiles — arguably the first AR applications to go mainstream — will soon become a standard commodity in economy models.
And as corporate partnerships with AR startups grow increasingly common, the convergence of augmented reality with sensors, networks, and IoT will transform almost every industry imaginable.
A Taste of Industry Transformations
Over the next few weeks of blogs, we will do a deeper dive into each industry, but it is worth considering some of AR’s most notable implications across a range of sectors.
In Manufacturing & Industry, AR training simulations are already beginning to teach us how to operate numerous machines and equipment, even to fly planes. Microsoft, for instance, is targeting enterprise clients with its HoloLens 2, as the AR device’s Remote Assist function allows workers to call in virtual guidance if unfamiliar problems arise in the manufacturing process.
Healthcare: AR will allow surgeons to “see inside” clogged arteries, provide precise incision guides, or flag potential risks, introducing seamless efficiency in everything from reconstructive surgeries to meticulous tumor removals. Medical students will use AR to peel back layers on virtual cadavers. And in everyday health, we will soon track nearly every health and performance metric — whether heart rate, blood pressure, or nutritional data — through AR lenses (as opposed to wearables).
Education: In our classrooms, AR will allow children (and adults alike!) to explore both virtual objects and virtual worlds. But beyond the classroom, we will have the option to employ AR as a private teacher wherever we go. Buildings will project their history into our field of view. Museums might have AR-enhanced displays. Every pond and park will double as a virtual-overlaid lesson in biology and ecology. Or teach your children the value of money with virtual budgeting and mathematical tabulations at grocery and department stores. Already, apps like Sky Map and Google Translate allow users to learn about their surroundings through smartphone camera lenses, and AR’s teaching capabilities are only on the rise.
Yet Retail & Advertising take AR’s transformative potential to a new level. Hungry and on a budget? Your smart AR contact lenses might show you all available lunch specials on the block, cross-referenced with real-time customer ratings, special deals, and your own health data for individualized recommendations. Storefront windows will morph to display your personalized clothing preferences, continuously tracked by AI, as eye-tracking technology allows your AR lenses to project every garment that grabs your attention onto your form, in your size. Smart AR advertising — if enabled — will target your every unique preference, transparently informing you of comparable, cheaper options the minute you reach for an item.
And in Entertainment, we will soon be able to toggle into imaginary realities, or even customize physical spaces with our own designs. 3D creations will become intuitive and shareable. Sports player stats will be superimposed onto live sporting events, as spectators recreate immersive stadiums with front-row seats in their own backyards. Turn on game mode, and every streetside, park, store, and neighborhood merges into a virtually overlaid game, socially interactive and interspersed with everyday life.
In Transportation, AR displays integrated in vehicle windows will allow users to access real-time information about the restaurants, stores, and landmarks they pass. Walking, biking, and driving directions will be embedded in our routes through AR. And when sitting in your autonomous vehicle-turned office on the way to work, AR will have the power to convert any vessel into a virtual haven of your choice.
A Day in the Life of 2030
Reaching for your AR-enabled glasses upon waking up, your Jarvis-like AI populates your visual field with any new updates and personalized notifications.
You begin the day with a new pancake recipe, directed seamlessly by a cooking app in your AR glasses, with ingredients tailored to new programmed dietary preferences. Glancing at your plate, your glasses inform you of the meal’s nutritional value, tracking these metrics in your health monitor.
As you need to fly cross-country today, your AI hails an autonomous shuttle to the airport. Along the way, you switch your glasses to creation mode, allowing you to populate entire swaths of the city with various art pieces your friends have created in the virtual world. Dropping a few of your own 3D designs across the city, your AR glasses even allow you to turn the vehicle floor into a virtual pond as you glide along a smart highway (equipped for electric vehicle charging).
Upon arriving at the airport, your AR glasses switch gears to navigation mode, displaying arrows that direct you seamlessly to your boarding gate.
Walking into your hotel, you activate tourist mode, offering a number of facts and relevant figures about nearby historical buildings and monuments. Toggle to restaurant mode for a look at nearby eatery reviews, tailored to the colleagues you’ll be dining with.
Winding down, you briefly scroll through some pictures captured with your glasses throughout the day, sharing them with family through an interface completely controlled via eye movements.
Welcome to the augmented world of 2030.
Final Thoughts
While enterprises are fueling initial deployment of AR headsets for employee training and professional retooling, widespread consumer adoption is fast reaching the horizon. And as hardware and connectivity skyrocket, driving down prices and democratizing access, sleek AR glasses — if not dematerialized lenses — will become an everyday given.
Advancements in cloud computing and 5G coverage are making AR products infinitely more scalable, ultra-fast, and transportable.
Yet ultimately, AR will give rise to neural architectures directly embedded through brain-computer interfaces. Our mode of interaction with the IoT will evolve from smartphone screens, to AR glasses, to contact lenses, to BCIs.

Smart Technology and Integration, How It’s Changing Our Lives
Each week alone, an estimated 1.3 million people move into cities, driving urbanization on an unstoppable scale.
By 2040, about two-thirds of the world’s population will be concentrated in urban centers. Over the decades ahead, 90 percent of this urban population growth is predicted to flourish across Asia and Africa.
Already, 1,000 smart city pilots are under construction or in their final urban planning stages across the globe, driving forward countless visions of the future.
As data becomes the gold of the 21st century, centralized databases and hyper-connected infrastructures will enable everything from sentient cities that respond to data inputs in real time, to smart public services that revolutionize modern governance.
Connecting countless industries — real estate, energy, sensors and networks, transportation, among others — tomorrow’s cities pose no end of creative possibilities and stand to completely transform the human experience.
In this blog, we’ll be taking a high-level tour of today’s cutting-edge urban enterprises involved in these three areas:
- Hyperconnected urban ecosystems that respond to your data
- Smart infrastructure and construction
- Self-charging green cities
Let’s dive in!
Smart Cities that Interact with Your Data
Any discussion of smart cities must also involve today’s most indispensable asset: data.
As 5G connection speeds, IoT-linked devices and sophisticated city AIs give birth to trillion-sensor economies, low latencies will soon allow vehicles to talk to each other and infrastructure systems to self-correct.
Even public transit may soon validate your identity with a mere glance in any direction, using facial recognition to charge you for individualized travel packages and distances.
As explained by Deloitte Public Sector Leader Clare Ma, “real-time information serves as the ‘eye’ for urban administration.”
In most cities today, data is fragmented across corporations, SMEs, public institutions, nonprofits, and personal databases, with little standardization.
Yet to identify and respond to urban trends, we need a way of aggregating multiple layers of data, spanning traffic flows, human movement, individual transactions, shifts in energy usage, security activity, and almost any major component of contemporary economies.
Only through real-time analysis of information flows can we leverage exponential technologies to automate public services, streamlined transit, smarter security, optimized urban planning and responsive infrastructure.
And already, cutting-edge cities across the globe are building centralized data platforms to combine different standards and extract actionable insights, from smart parking to waste management.
Take China’s Nanjing, for instance.
With sensors installed in 10,000 taxis, 7,000 buses and over 1 million private vehicles, the city aggregates daily data across both physical and virtual networks. After transmitting it to the Nanjing Information Center, experts can then analyze traffic data, send smartphone updates to commuters and ultimately create new traffic routes.
Replacing the need for capital-intensive road and public transit reconstruction, real-time data from physical transit networks allow governments to maximize value of preexisting assets, saving time and increasing productivity across millions of citizens.
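A minimal sketch of this kind of aggregation, assuming hypothetical road-segment names and a congestion threshold (not Nanjing’s actual schema), might group incoming geotagged speed readings by segment and flag the slow ones:

```python
from collections import defaultdict

# Hypothetical geotagged readings streamed from taxis and buses:
# (road_segment_id, observed_speed_kmh)
readings = [
    ("ring_road_n", 12.0), ("ring_road_n", 15.5), ("ring_road_n", 9.8),
    ("zhongshan_rd", 38.2), ("zhongshan_rd", 41.0),
]

def congested_segments(readings, threshold_kmh=20.0):
    """Average the speed per road segment; flag those below the threshold."""
    speeds = defaultdict(list)
    for segment, speed in readings:
        speeds[segment].append(speed)
    return {
        segment: sum(v) / len(v)
        for segment, v in speeds.items()
        if sum(v) / len(v) < threshold_kmh
    }

print(congested_segments(readings))  # only 'ring_road_n' is flagged
```

A real deployment would of course run this continuously over millions of readings, but the pattern — aggregate, average, threshold, alert — is the same.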
But beyond traffic routing, proliferating sensors and urban IoT are giving rise to real-time monitoring of any infrastructural system.
Italy’s major rail operator Trenitalia has now installed sensors on all its trains, deriving real-time status updates on each train’s mechanical condition. Now capable of predicting maintenance needs in advance of system failure, the operator is making transit disruptions a thing of the past.
Los Angeles has embedded sensors in 4,500 miles’ worth of new LED streetlights (replacing its previous fixtures). The minute a bulb malfunctions or runs low, it can be fixed near-immediately, forming part of a proactive city model that catches glitches before they escalate.
And Hangzhou, home to e-commerce giant Alibaba, has now launched a “City Brain” project, aiming to build out one of the most data-responsive cities on the planet.
With cameras and other sensors installed across the entire city, a centralized AI hub processes data on everything from road conditions to weather data to vehicular collisions and citizen health emergencies.

Overseeing a population of nearly 8 million residents, Hangzhou’s City Brain then manages traffic signals at 128 intersections (coordinating over 1,000 road signals simultaneously), tracks ambulances en-route and clears their paths to hospitals without risk of collision, directs traffic police to accidents at record rates, and even assists city officials in expedited decision-making. No more wasting time at a red light when there is obviously no cross traffic or pedestrians.
Already, the City Brain has cut ambulance and commuter traveling times by half. And as reported by China’s first AI-partnered traffic policeman Zheng Yijiong, “the City Brain can detect accidents within a second” allowing police to “arrive at [any] site [within] 5 minutes” across an urban area of over 3,000 square miles.
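The path-clearing idea can be sketched as a simple “green wave”: given an ambulance’s route and estimated travel times between intersections, schedule a green window at each signal just before the vehicle arrives. (The route IDs, travel times, and window length below are illustrative assumptions, not Hangzhou’s actual system.)

```python
def green_wave(route, travel_times_s, start_s=0, window_s=20):
    """Return (intersection, green_start, green_end) for each signal
    along the route, opening each green 5 s before projected arrival."""
    schedule = []
    eta = start_s
    for intersection, leg in zip(route, travel_times_s):
        eta += leg  # projected arrival time at this intersection
        schedule.append((intersection, eta - 5, eta - 5 + window_s))
    return schedule

route = ["X12", "X47", "X03"]   # hypothetical intersection IDs
travel_times = [40, 55, 30]     # seconds between consecutive signals
for stop in green_wave(route, travel_times):
    print(stop)
```

A production system would continuously re-estimate arrival times from live GPS rather than fixed travel times, but the scheduling logic is the same in spirit.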
But beyond oversight of roads, traffic flows, collisions and the like, converging sensors and AI are now being used to monitor crowds and analyze human movement.
Companies like SenseTime now offer software to police bureaus that can not only identify live faces, individual gaits and car license plates, but even monitor crowd movement and detect unsafe pedestrian concentrations.
Some researchers have even posited the use of machine learning to predict population-level disease spread through crowd surveillance data, building actionable analyses from social media data, mass geolocation and urban sensors.
Yet aside from self-monitoring cities and urban AI ‘brains,’ what if infrastructure could heal itself on demand? Forget sensors, connectivity and AI — enter materials science.
Self-Healing Infrastructure
The U.S. Department of Transportation estimates a $542.6 billion backlog of needed U.S. infrastructure repairs.
And as I’ve often said, the world’s most expensive problems are the world’s most profitable opportunities.
Enter self-healing construction materials.
First up, concrete.
In an effort to multiply the longevity of bridges, roads, and any number of infrastructural fortifications, engineers at Delft University have developed a prototype of bio-concrete that can repair its own cracks.
Mixed in with calcium lactate, the key ingredient of this novel ‘bio-concrete’ is minute capsules of limestone-producing bacteria distributed throughout the structure. Only when the concrete cracks, letting in air and moisture, do the bacteria awaken.
Like clockwork, the bacteria begin feeding on the surrounding calcium lactate, producing a natural limestone sealant that can fill cracks in a mere three weeks — long before small crevices can even threaten structural integrity.
As head researcher Henk Jonkers explains, “What makes this limestone-producing bacteria so special is that they are able to survive in concrete for more than 200 years and come into play when the concrete is damaged. […] If cracks appear as a result of pressure on the concrete, the concrete will heal these cracks itself.”
Yet other researchers have sought to crack the code (no pun intended) of living concrete, testing everything from hydrogels that expand 10X or even 100X their original size when in contact with moisture, to fungal spores that grow and precipitate calcium carbonate the minute micro-cracks appear.
But bio-concrete is only the beginning of self-healing technologies.
As futurist architecture firms start printing plastic and carbon-fiber houses, engineers are tackling self-healing plastic that could change the game with economies of scale.
Plastic not only holds promise in real estate on Earth; it will also serve as a handy material in space. NASA engineers have pioneered a self-healing plastic that may prove vital in space missions, preventing habitat and ship ruptures in record speed.
The implications of self-healing materials are staggering, offering us resilient structures both on Earth and in space.
One additional breakthrough worth noting involves the magic of graphene.
Perhaps among the greatest physics discoveries of the century, graphene is a 2D honeycomb lattice of carbon over 200X stronger than steel, yet just one atom thick.
While yet to come down in cost, graphene unlocks an unprecedented host of possibilities, from weather-resistant and ultra-strong coatings for existing infrastructure, to multiplied infrastructural lifespans. Some have even posited graphene’s use in the construction of 30 km tall buildings.
And it doesn’t end there.
As biomaterials and novel polymers will soon allow future infrastructure to heal on its own, nano- and micro-materials are ushering in a new era of smart, super-strong and self-charging buildings.

Revolutionizing structural flexibility, carbon nanotubes are already dramatically increasing the strength-to-weight ratio of skyscrapers.
But imagine if we could engineer buildings that could charge themselves… or better yet, produce energy for entire cities, seamlessly feeding energy to the grid.
Self-Powering Cities
As exponential technologies across energy and water burst onto the scene, self-charging cities are becoming today’s testing ground for a slew of green infrastructure pilots, promising a future of self-sufficient societies.
In line with new materials, one hot pursuit surrounds the creation of commercializable solar power-generating windows.
In the past few years, several research teams have pioneered silicon nanoparticles to capture everyday light flowing through our windows. Little solar cells at the edges of windows then harvest this energy for ready use.
Scientists at Michigan State, for instance, have developed novel “solar concentrators.” Layered over any window, these concentrators capture non-visible wavelengths of light — near-infrared and ultraviolet — and guide them to solar cells embedded at the edge of each window panel.
Rendered entirely invisible, such solar cells could generate energy on almost any sun-facing screen, from electronic gadgets to glass patio doors to reflective skyscrapers.
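As a back-of-the-envelope illustration of the potential (every number here is an assumption for the sketch, not a measured figure for any product): multiply window area by incident solar irradiance and a modest concentrator efficiency.

```python
def window_power_watts(area_m2, irradiance_w_m2=600.0, efficiency=0.05):
    """Rough electrical output of a solar-concentrator window.
    Assumes partial sun on a vertical pane (~600 W/m^2) and ~5%
    conversion efficiency, plausible for early transparent concentrators."""
    return area_m2 * irradiance_w_m2 * efficiency

# A 2 m^2 office window under these assumptions:
print(f"{window_power_watts(2.0):.0f} W")  # 60 W
```

Modest per pane, but multiplied across every sun-facing surface of a glass skyscraper, the totals become meaningful.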
And beyond self-charging windows, countless future city pilots have staked ambitious goals for solar panel farms and renewable energy targets.
Take Dubai’s “Strategic Plan 2021,” for instance.
Touting a multi-decade Dubai Clean Energy Strategy, Dubai aims to gradually derive 75 percent of its energy from clean sources by 2050.
With plans to launch the largest single-site solar project on the planet by 2030, boasting a projected capacity of 5,000 megawatts, Dubai further aims to derive 25 percent of its energy needs from solar power in the next decade.
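For scale, a rough estimate of that site’s annual output (using an assumed capacity factor, since the text gives only nameplate capacity):

```python
capacity_mw = 5_000      # projected nameplate capacity cited above
capacity_factor = 0.25   # assumed for desert solar; actual values vary
hours_per_year = 24 * 365

annual_gwh = capacity_mw * capacity_factor * hours_per_year / 1_000
print(f"~{annual_gwh:,.0f} GWh per year")  # ~10,950 GWh
```

That is on the order of ten terawatt-hours a year from a single site, under these illustrative assumptions.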
And in the city’s “Strategic Plan 2021,” Dubai aims to soon:
- 3D-print 25 percent of its buildings;
- Make 25 percent of transit automated and driverless;
- Install hundreds of artificial “trees,” all leveraging solar power and providing the city with free WiFi, info-mapping screens, and charging ports;
- Integrate passenger drones capable of carrying individuals to public transit systems;
- And drive forward countless designs of everything from underwater bio-desalination plants to smart meters and grids.

A global leader in green technologies and renewable energy, Dubai stands as a gleaming example that any environmental context can give rise to thriving and self-sufficient eco-powerhouses.
But Dubai is not alone, and others are quickly following suit.
Leading the pack of China’s 500 smart city pilots, Xiong’an New Area (near Beijing) aims to become a thriving economic zone powered by 100 percent clean electricity.
And as of this December, 100 U.S. cities have committed to the same goal and are on their way toward it.
Cities as Living Organisms
As new materials forge ahead to create pliable and self-healing structures, green infrastructure technologies are exploding into a competitive marketplace.
Aided by plummeting costs, future cities will soon surround us with self-charging buildings, green city ecosystems, and urban residences that generate far more than they consume.
And as 5G communications networks, proliferating sensors and centralized AI hubs monitor and analyze every aspect of our urban environments, cities are fast becoming intelligent organisms, capable of seeing and responding to our data in real time.


Imagine making fuel, plastics, and concrete out of thin air.

That’s the promise of Direct Air Capture (DAC), a technology that fundamentally disrupts our contemporary oil economy.
Mimicking what already occurs in nature, DAC essentially involves industrial photosynthesis, harnessing the power of the sun to draw carbon directly out of the atmosphere.
This captured carbon can then be turned into numerous consumer goods, spanning fuels, plastics, aggregates and concrete (as I write this blog, I’m even wearing shoes 3D-printed from carbon).
A vital component of every life form on Earth, carbon also stands at the core of manufacturing, energy, and transportation, some of the world's highest-valued industries.
And in the coming 10 years, sourcing carbon out of the air will become more cost-effective than carbon sourced from the ground (oil).
By 2030, the carbon capture and utilization (CCU) industry is expected to reach $800 billion. And by 2050, that number is projected to surge five-fold to a $4 trillion market, according to McKinsey.
But let’s start with the basics…
Direct Air Capture: The What and the How
Carbon capture might seem like old news, usually written off as prohibitively expensive and unrealistic.
But DAC is fast changing the rules of the game, capable of sucking massive quantities of carbon dioxide out of the air, anywhere, at any time.
First-generation CCS (Carbon Capture and Storage) used a technology called Point Source Capture to take CO2 directly from smoke stacks and pump it into the ground for permanent sequestration.
Yet this process required massive industrial plants tethered to CO2 emission points, allowing far less flexibility.
DAC, by contrast, can be deployed anywhere, completely independent of emission patterns.
This is because CO2 is distributed evenly throughout the atmosphere: there is as much CO2 above Los Angeles, California as there is above the Patagonian Desert. And for the purposes of DAC, this even distribution means drastically reduced transportation costs.
So how does it work? While a few different techniques have been developed, the most common involves industrial-scale fans that draw ambient air through a filter. The filter then uses a chemical adsorbent (which holds molecules as a thin film on its surface) to produce a pure, storable stream of carbon dioxide.
But beyond the value of carbon itself, DAC could serve as a negative carbon technology, helping us lock away atmospheric CO2 while birthing an abundance of material products.
Today’s Biggest Players
Companies like Global Thermostat, Carbon Engineering, and Climeworks are now on the cutting edge of DAC technologies, capturing record quantities of CO2 from the atmosphere.
Just last October (2018), a National Academy of Sciences (NAS) report even stated that DAC could become feasible enough for worldwide adoption within just the next three years. As NAS estimates, once the price of CO2 extraction dips below $100-150 per ton, air-captured carbon will be economically competitive with traditionally sourced oil.
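To get a feel for why that per-ton threshold matters, here is a back-of-the-envelope sketch (a rough illustration, not NAS's actual analysis). It assumes roughly 8.9 kg of CO2 is released when a gallon of gasoline burns, so about that much captured CO2 is needed as feedstock per fully synthetic gallon, and it ignores all downstream conversion costs:

```python
# Rough check of the NAS cost threshold: what does $X per metric ton
# of captured CO2 imply as a feedstock cost per gallon of synthetic
# gasoline? Assumes ~8.9 kg CO2 per gallon (the approximate amount
# released when a gallon of gasoline is burned); conversion costs
# and process inefficiencies are deliberately ignored.

CO2_KG_PER_GALLON = 8.9  # approximate combustion emissions per gallon

def feedstock_cost_per_gallon(usd_per_ton_co2: float) -> float:
    """CO2 feedstock cost per gallon of fully synthetic gasoline."""
    return usd_per_ton_co2 * CO2_KG_PER_GALLON / 1000.0  # per metric ton

for price in (50, 94, 120, 150):
    print(f"${price}/ton CO2 -> ${feedstock_cost_per_gallon(price):.2f}/gallon")
```

At $100 per ton, the CO2 feedstock alone costs under a dollar per gallon, which is why the $100-150 range is treated as the competitiveness tipping point.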
Since the report’s release, DAC has gained tremendous traction. Bill Gates-backed Carbon Engineering recently closed a $68 million series C financing round and now claims it can achieve CO2 extraction at as little as $94 per ton, at scale.
Or take Swiss startup Climeworks, which has recently deployed its third DAC plant after receiving north of $35 million in funding from the Zürcher Kantonal Bank.
Yet another contender, Global Thermostat has already demonstrated that its technology can remove CO2 for a mere $120 per ton at its facility in Huntsville, Alabama. And at scale, the startup predicts it could achieve DAC for as little as $50 a ton.
Demonstrating the sheer range of use cases, Global Thermostat has now closed deals with industrial giants from Coca-Cola—which aims to use DAC to source CO2 for its carbonated beverages—to ExxonMobil. In just the next few years, the oil and gas giant intends to pioneer a DAC-to-fuel business on the back of Global Thermostat's techniques.
Iterating upon the basic method of DAC explained above, Carbon Engineering’s approach involves a potassium hydroxide solution. This reacts with CO2 to form potassium carbonate, which—in the process—removes a certain amount of carbon dioxide from the air passing over it.
While air remnants containing less CO2 are released, the final solution is then treated to separate out captured carbon dioxide.
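Carbon Engineering's loop can be sketched as the following simplified reaction cycle, as commonly described in the literature (plant-level details vary, and side reactions are omitted):

```latex
\begin{align*}
\mathrm{CO_2} + 2\,\mathrm{KOH} &\rightarrow \mathrm{K_2CO_3} + \mathrm{H_2O} && \text{(air contactor)}\\
\mathrm{K_2CO_3} + \mathrm{Ca(OH)_2} &\rightarrow 2\,\mathrm{KOH} + \mathrm{CaCO_3} && \text{(pellet reactor)}\\
\mathrm{CaCO_3} &\xrightarrow{\ \Delta\ } \mathrm{CaO} + \mathrm{CO_2} && \text{(calciner)}\\
\mathrm{CaO} + \mathrm{H_2O} &\rightarrow \mathrm{Ca(OH)_2} && \text{(slaker)}
\end{align*}
```

Both the hydroxide and the calcium are regenerated each cycle, so the only net inputs are air and energy, and the only net output is a concentrated CO2 stream.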
Once carbon capture is complete, processes like DAC-derived fuels can begin.
Direct Air Capture Fuels
The know-how for converting air into fuel has been around for a hundred years or more. After all, it’s the way all plant life grows. But until now, there was no cheap and abundant source of CO2.
For millions of years, plant species have captured CO2, converting it to sugar via photosynthesis. We have then either burned that plant matter directly or relied on heat and pressure within the Earth's crust to convert it into hydrocarbon fuels over long periods of time.
Theoretically, this is not hard to do. The process requires two steps: first, electrolysis splits H2O into hydrogen and oxygen. Second, the Sabatier reaction (1897) or the Fischer-Tropsch process (1925) bonds the carbon in CO2 to hydrogen, thereby creating hydrocarbon fuels just like the ones we purchase at gas stations or use in our stoves.
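In outline, the two steps above look like this (a simplified sketch; real plants add gas purification, recycling, and heat-integration stages):

```latex
\begin{align*}
2\,\mathrm{H_2O} &\rightarrow 2\,\mathrm{H_2} + \mathrm{O_2} && \text{(electrolysis)}\\
\mathrm{CO_2} + 4\,\mathrm{H_2} &\rightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O} && \text{(Sabatier, 1897: methane)}\\
\mathrm{CO_2} + \mathrm{H_2} &\rightarrow \mathrm{CO} + \mathrm{H_2O} && \text{(reverse water-gas shift)}\\
n\,\mathrm{CO} + (2n{+}1)\,\mathrm{H_2} &\rightarrow \mathrm{C}_n\mathrm{H}_{2n+2} + n\,\mathrm{H_2O} && \text{(Fischer-Tropsch, 1925: liquid fuels)}
\end{align*}
```

The Sabatier route yields synthetic natural gas, while the reverse water-gas shift followed by Fischer-Tropsch yields liquid hydrocarbons such as diesel.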
Essentially, DAC uses solar (or other renewable energy sources) to capture carbon dioxide from the air, bond it with hydrogen molecules and create burnable fuels molecularly identical to natural gas and diesel.
In other words, the process mimics a battery in its method of energy storage: it takes energy from the sun and stores it in a stable, readily usable fuel.
Very soon, we will indeed be able to make fuel out of thin air.
Imagine a world powered by carbon-neutral fuels. The advantage here, in part, is that DAC fuels use the same infrastructural elements—pipes, gas stations, and the like—that already support our modern fossil fuel economy. Yet even while using legacy distribution systems, DAC-derived fuels are carbon-neutral, sidestepping fossil fuels' environmental toll.
Perhaps most exciting, DAC could equalize fuel costs across the globe, democratizing immediate access. Remote or oil-distant regions, which currently suffer high fuel prices given long-distance transit, will be able to source their own fuel, regardless of geography. And not only will DAC fundamentally redefine geopolitics, but it will be an economic boon to nations like Australia, no longer in need of international oil shipments.
But captured CO2-to-fuel is just one of many exciting examples of DAC’s extraordinary potential.
Commercial Use Cases Are Limitless
In just the next few decades, we are about to manufacture a significant percentage of the world’s plastics and building materials out of the air.
Take concrete, for instance. The most widely consumed man-made material on Earth (second only to water in total consumption), concrete now accounts for a whopping 7 percent of global CO2 emissions.
Yet as it turns out, injecting CO2 into cement as it’s being manufactured strengthens the mixture and produces a far sturdier end-product. This process also permanently sequesters CO2 into cement, largely offsetting the material’s high footprint.
Up until now, however, we had no cheap and abundant source of CO2 to achieve this. Yet with current DAC technologies and soon-to-come iterations, suppliers can now produce far more robust cement at lower costs.
NRG COSIA Carbon XPRIZE finalist CarbonCure is one such enterprise. Having raised more than $9 million, the team is now developing its latest application of DAC to create carbon-neutral concrete.
Yet another XPRIZE finalist, Carbon Upcycling UCLA, utilizes CO₂ to create a product dubbed CO₂NCRETE. A low-carbon concrete-equivalent material, CO₂NCRETE™ has achieved a CO₂ footprint approximately 50 percent lower than that of traditional concrete. And the product is just as viable.
Or take Carbon Capture Machine (CCM), which can create carbonate solids usable in a variety of applications. First, proprietary CCM technology dissolves CO2 from any source in dilute alkali to form a carbonate solution, the precursor to its building materials.
Diving quickly into technicality: the carbonate solution reacts with readily and abundantly available calcium (Ca++) and magnesium (Mg++) brines to selectively precipitate CaCO3 (Precipitated Calcium Carbonate, PCC) and MgCO3·3H2O (Precipitated Magnesium Carbonate, PMC).
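The precipitation steps just described can be written out as follows (a simplified sketch; real brine chemistry involves additional ions and equilibria):

```latex
\begin{align*}
\mathrm{CO_2} + 2\,\mathrm{OH^-} &\rightarrow \mathrm{CO_3^{2-}} + \mathrm{H_2O} && \text{(dissolution in dilute alkali)}\\
\mathrm{Ca^{2+}} + \mathrm{CO_3^{2-}} &\rightarrow \mathrm{CaCO_3}\!\downarrow && \text{(PCC)}\\
\mathrm{Mg^{2+}} + \mathrm{CO_3^{2-}} + 3\,\mathrm{H_2O} &\rightarrow \mathrm{MgCO_3}\!\cdot\!3\mathrm{H_2O}\!\downarrow && \text{(PMC)}
\end{align*}
```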
If successful, these conversion products are carbon-negative, high-value feedstocks in great demand across countless legacy industries. PCCs, for instance, are currently used in paper-making, plastics, paints, and adhesives, while future applications in cement and concrete are now under development.
PMC, on the other hand, is an entirely new cement-like product that can be cast into final shapes and thermally cured at low temperature. During curing, the material undergoes spontaneous reaction bonding to form rigid solids (blocks, panels, tiles, etc.).
But beyond Earth-bound utility, DAC could hold countless vital applications in extra-planetary ventures.
With an atmosphere that is roughly 95 percent CO2, Mars could be an ideal target for DAC, not to mention an optimal source of needed commodities. To successfully colonize and establish a society on Mars, DAC could help us produce everything from fuel and food to 3D printed replacement parts and construction tools.
Even today, SpaceX’s intended Mars strategy largely relies on the conversion of CO2 into methane for rocket fuel. Meanwhile, NASA is hosting a $1 million CO2 Conversion Challenge (part of its Centennial Challenges program), inviting teams to devise carbon utilization technologies that turn CO2 into sugar molecules on Mars.
Final Thoughts
Direct Air Capture will soon allow us to sequester gigatons of CO2 from the atmosphere, yielding material abundance for countless everyday products. By making CO2 a vital part of our economy, we can begin to derive incredible value from one of our principal climate change agents, currently emitted as a “waste” product.
And applications of captured carbon are near-limitless. Whether for fuel on Mars, smart city infrastructural equipment, or everyday plastic commodities, our atmosphere’s carbon reserves are free for the taking and will fundamentally transform our global energy and materials economy.
Welcome to the age of carbon-derived abundance.

Contributors: Peter Diamandis and Clifford Locks
20 Questions About Boards
I’ve been asked to share my Board Document. Always feel free to reach out or refer me to your colleagues, for a Board of Directors or strategic senior executive position.
1. What is a board of directors?
A corporation, whether for-profit or nonprofit, is required to have a governing board of directors. A board of directors is made up of a group of senior advisors who oversee the activities of a company and represent its shareholders. Every public company must have a board of directors. Private companies are not required to have boards, although many of them do.
2. What is the difference between a for-profit “corporate” board and a nonprofit board?
For-profit board members often are paid; nonprofit board members usually are not. For-profit board members uniquely attend to decisions about dispersing profits to owners (stockholders) oftentimes in the form of stock equity and dividends. Nonprofit board members do not seek to maximize and disperse profits to the owners — the owners of nonprofits are members of the community. They serve in the interest of public stakeholders.
3. What does a board of directors do?
Corporate boards select, appoint, and review the performance of the chief executive and other key executives. They determine the direct compensation and incentive plan for these executives; ensure the availability of financial resources; review and approve annual budgets and company financials; and approve strategic decisions.
4. What is the role of the board’s Chairman?
The Chairman of the board manages the board’s business and acts as its facilitator and guide. Chairmen determine board composition and organization, clarify board and management responsibilities, plan and manage board committee meetings, and develop the effectiveness of the board. In many companies, CEOs serve as Chairmen; in other companies the role is separated.
5. What is the difference between the CEO and the Chairman?
A CEO is a company’s top decision maker – all other executives answer to him or her. CEOs are accountable to the board of directors for company performance. The Chairman of a company is the head of its board of directors. The board is elected by shareholders and is responsible for protecting investors’ interests, such as the company’s profitability and stability. The board selects the Chairman.
6. How many people are typically on corporate boards?
Boards typically have between 7 and 15 members, although some boards have as many as 31 members. According to a Corporate Library study the average board size is 9.2 members. Some analysts think boards should have at least seven members to satisfy the board roles and committees.
7. How do I find out how many women are on a company’s board of directors?
Companies usually list their directors in the corporate governance section of their website. You can often identify the women by their names, but if not, you can go to the company’s 10K document and read their bios.
8. What are corporate board committees?
There are four primary board committees: executive, audit, compensation, and nominating, although there may be others, depending on corporate philosophy and special circumstances relating to a company’s line of business. It’s usually recommended that the compensation and audit committees be made up of independent directors. The executive committee is a smaller group that might meet when the full board is not available. The audit committee reviews the financial statements with internal auditors and outside audit companies. The compensation committee determines the salaries and bonuses of top executives, including the board itself. The nominating committee decides the slate of directors for the shareholders to vote their approval.
9. Why are some board members considered independent and others are not?
An independent director, or outside director, is a member of a board of directors who does not work for the company. Independent directors are important because they bring diverse backgrounds to decision making and are unbiased regarding company decisions. Independent directors are paid a standard fee for each board meeting. Inside directors are members of the corporation, usually part of the corporation’s management team.
10. What are corporate bylaws and why are they important?
Corporate bylaws are rules that govern how a company operates. They state the rights and powers of shareholders, directors, and officers. If the board wishes to change bylaws, they often need to have shareholders vote for these changes.
11. What is conflict of interest?
Conflict of interest occurs when the personal or professional interests of a board member or senior executive are potentially at odds with the best interests of the corporation. Conflicts of interest often result in loss of public confidence and a damaged reputation. A conflict of interest might occur if two CEOs sit on each other’s boards.
12. What are the qualifications to be on a corporate board of directors?
Individuals who are asked to serve on a board of directors have several years of executive experience or other equivalent professional experience in key areas that are beneficial to the company. Directors must be able to read, understand, and offer suggestions and comments on financial statements. Board members should be representative of the constituents that a company serves, including ethnic diversity, gender, and age.
13. How are new board members chosen?
In a public company, directors are selected based on criteria set by the nominating committee. Most new directors are chosen for their expertise in key areas that are useful to the corporation. Sometimes, CEOs and board chairs select directors they already know. Or, they will turn to executive search firms to find qualified candidates that meet their search criteria.
14. How has the role of the board of directors evolved over the years?
Many boards used to be comprised of employees, family members, and friends. But shareholder influence and government regulation now require boards to have independent directors not associated with the company or its executive team. Today there are many shareholder resolutions requiring companies to diversify their boards, and appoint directors of different backgrounds, gender, and race.
15. What is the time commitment of a board member?
Board directors must be able to commit the time necessary to responsibly fulfill their commitment to the organization. This includes board training, analyzing financial statements, reviewing board documents before board meetings, attending board meetings, serving on committees to which they are assigned, attending meetings, and doing whatever else the company requires. Most boards meet at least four times a year and some meet monthly.
16. What are the personal and professional benefits of being on a corporate board?
Being asked to serve on a corporate board is flattering. It shows that your skills are valued outside of your own organization. Directors meet interesting people and grapple with interesting issues. Independent directors are often well paid.
17. How much do board members get paid?
Corporate directors are well compensated, and compensation is often determined by the size of the company. It’s not unusual for corporate directors of large companies to be paid $100,000 or more each year they serve. They often are also granted stock options, which could become very valuable.
18. Do boards have term or age limits?
Some boards have term limits and age limits and others do not. The National Association of Corporate Directors recommends term limits of 10 to 15 years to promote turnover and obtain fresh ideas. Age limits range from 70 to 80 years old, and many companies have no limit at all. Without term or age limits it is often difficult for companies to suggest to board members that they retire or leave.
19. How do boards of directors affect people and communities?
Boards of directors guide corporate behavior. Decisions made by the boards of public companies can directly impact our daily lives. For example, a board might approve decisions to close or relocate factories or merge with other companies, which could result in loss of jobs in a community. Good companies often provide financial support to non-profit organizations in their communities.
20. Are boards required to consider diversity when electing directors?
There are no rules about board composition. But it is well recognized that diversity on boards contributes to better decision making. Last year, the Securities and Exchange Commission adopted a governance disclosure rule that requires companies to disclose whether they consider diversity when nominating director candidates. There is no standard, however, as to what constitutes a diverse board.
Sources:
- Daniel L. Kurtz and Sarah E. Paul, Managing Conflicts of Interest: A Primer for Nonprofit Boards (BoardSource 2006). Accessed on October 23, 2010.
- McNamara, Carter. Overview of Roles and Responsibilities of Corporate Board of Directors. (Free Management Library). Accessed on October 23, 2010.
- Investopedia Staff. Evaluating The Board Of Directors. (Investopedia). Accessed on October 23, 2010.
- What are corporate bylaws and why are they important? (AllBusiness) Accessed on October 23, 2010.
- Brush, Michael. Pay soars in the boardroom. (MSN Money, 2005). Accessed on October 23, 2010.

Start to Ask WHO, not HOW for Successful Project Implementation
When most entrepreneurs (including me) face a challenge, our first reaction is to ask: “How do I solve this problem?”
As an Executive Strategic Coach I teach a powerful management shortcut for success.
Don’t ask “how.” Instead, ask “who.”
This blog explores that concept. Feel free to contact me when you need a “who” to seamlessly execute a project.
Start to Ask WHO, not HOW…
How much value are you leaving on the table because you don’t have a WHO or because you are caught in the minutia of implementing a project?
As entrepreneurs, each of us has a constant stream of ideas and new projects that might add massive value — if they ever get implemented.
Now, as soon as I come up with an idea, my sole responsibility is to ask, “Who am I going to tag in to implement this project?” It has been an absolute game-changer.
Ultimately, asking WHO, not HOW, has transformed my ability to multiplex across my constantly increasing number of business ventures and projects.
Now if an idea comes to me during a moment of overload, I can still move it forward. I’ll spend 30 minutes creating an Impact Filter (a Strategic Coach client tool) explaining why the project is important, defining measurable criteria for success, and then hand that document to the right “who” in my ecosystem.
Simple enough, right? So why are we programmed to dive right into the HOW without thinking to ask WHO?
The Entrepreneur’s Dilemma…
As Dan Sullivan explained, “Our education system plays a major role in why we ask HOW and not WHO from the get-go. With the exception of a few exceptional schools, the education system is designed to prepare people for a life of ‘HOW.’
Kids in traditional classrooms around the world are graded on HOW they solve particular problems on their own. When you leave school, you need to collaborate and delegate to thrive. But in school, they don’t call it collaborating and delegating — they call it cheating.”
The education system engrains asking HOW and discourages asking WHO.
If you want to create a massive impact, you need to overcome old habits and begin to view human capital as an abundant resource. From there, curate a strong and passionate team to support you and act as your WHOs.
By delegating the HOW to my WHOs, my productivity and my overall passion go through the roof because I can remove myself from the mental weight and obligation of unfinished projects, allowing me to focus on what I truly love to do.
A final note for this section: you can even ask WHO when you build your team — go ahead and find yourself a WHO that finds WHOs!
Digitizing and Delocalizing WHOs
Over the past two decades, we’ve seen various forms of software emerge as the WHOs that figure out HOW.
I can verbally ask my phone, “What is the GDP of Guatemala?”, and Siri or Google Assistant serves as the WHO that executes the research task.
Before the advent of search engines, you’d have had to go to the library and do the research to find the right book, or you would have had to instruct an employee to travel and do that research for you.
Platforms and services like Amazon, Google and Baidu are all WHOs that entrepreneurs can tap to carry out the HOW.
In a similar vein, in a world soon to be electrified with gigabit connection speeds, entrepreneurs anywhere in the world can find their WHOs anywhere else in the world.
Eventually, our ultimate WHO will be an artificial intelligence software shell (think: Jarvis) that’s always on, always listening, always watching… always there to help and be the WHO for your every HOW.
Closing Thoughts
Finding your WHOs will make your HOWs happen faster and cheaper than ever before.
At the end of the day, while it’s really important for you as a leader to be smart, driven, ethical and visionary, the only way for you to scale your impact is to build an incredible team of WHOs behind you.
Right now is the greatest time in human history to find your WHOs. What are you waiting for?

Contributor: Peter Diamandis
The Future of 3D Printing and How it’s Changing the World
3D printing, also known as additive manufacturing, translates digital files into three-dimensional objects by depositing material layer upon layer. Printheads release matter in precise orientations that can produce complex structures, ranging from jewelry to three-story homes.
250 Materials: Current 3D printers can produce functional objects in partial or full color from over 250 different materials, including metals, plastics, ceramics, glass, rubber, leather, stem cells, and even chocolate.
100x Faster: More recently, groundbreaking stereolithography methods have succeeded in producing complex shapes at up to 100 times the speed of traditional 3D printers. Building from a bed of photoreactive liquid resin, the application of different light wavelengths has been found to selectively harden the resin as it’s released and thereby achieve a continuous print. Say goodbye to incremental layering!
90% Material Efficient: Beyond rapid and high-resolution production, additive manufacturing poses extraordinary second-order implications. Promising decimated economic and environmental costs, 3D printing eliminates tremendous amounts of waste, as raw material requirements are reduced by as much as 90 percent.
3D printing further unlocks opportunities for mass customization, democratized production, and systematic perfection.
Avi Reichental’s Top Predictions:
Avi Reichental, a good friend of my friend Peter Diamandis, is the “go-to expert” in additive manufacturing and the Founder and CEO of XponentialWorks, an advisory, venture investment and incubation ecosystem company that aims to monetize exponential tech innovation and business model disruption.
For 12 years, Reichental served as the CEO of 3D Systems, the largest publicly traded 3D printing company in the world, making him an early pioneer of additive manufacturing.
By 2024, Reichental predicts:
- 50 percent of all manufacturing companies will have 3D printing operations in production;
- 40 percent of all surgeons will practice with 3D models;
- 50 percent of all consumer businesses will have revenue-bearing 3D printing operations.
But it doesn’t stop there. Already, major international breakthroughs in additive manufacturing are accelerating these trends and birthing new convergent applications.
The Next 5 Printing Breakthroughs (2019-2024)
1. 3D printing speeds are slated to increase by 50X – 100X.
3D printing rates have typically been limited by (1) how much force a printhead can apply, (2) how fast a printer can heat the material to induce flow, and (3) how quickly the printhead itself can move. In a new feat, however, the MIT Laboratory for Manufacturing and Productivity recently created a printer 10 times faster than traditional desktop models and three times faster than a $100,000 industrial-scale system. Achieving record speeds, the MIT team printed a helical bevel gear in a mere 10 minutes, and a pair of eyeglass frames in only 3.6 minutes. The acceleration of 3D printing will revolutionize nearly every industry, from retail to manufacturing.
2. Sustainable, affordable, 3D-printed neighborhoods are launching.
The construction and real estate industries will experience disruption at monumental scales, as 3D-printed homes offer cheaper, environmentally-friendly alternatives to traditional housing. 3D-printed homes appeared for the first time last year in the Dutch city of Eindhoven, where a shortage of bricklayers increased the demand for this technology. The futuristic buildings waste less cement, cutting costs and resources. In the future, home printers will incorporate infrastructure including drainage pipes and even potentially smart sensors, rendering a fully integrated living experience. And more recently, a startup called NewStory was able to build 100 homes in 8 months for about $6,000 each.
3. Convincing and delicious 3D-printed steaks and burgers in fine restaurants on Earth and in space.
3D-printed meat using plant-based proteins will provide a more sustainable solution to feeding the world’s growing population. Livestock produces 14.5 to 18 percent of global human-induced greenhouse gas emissions. Yet 3D-printed meat can provide the same satisfaction of meat consumption without the harmful environmental effects. Over the next five years, costs will lower and textures will improve. Israeli company Chef-it and Giuseppe Scionti’s NovaMeat are already making progress.
4. Metal 3D printers will overtake plastics.
Prepare for the emergence of 3D-printed jewelry, car and airplane parts, kitchenware, and prototypes. 3D metal printers will not only eliminate waste in manufacturing, but also create more lightweight parts — a development especially pertinent to aircraft construction. This technology will grow increasingly available at the consumer level as well, providing more flexibility in product design than traditional plastic printers. Biodegradable cellulose may also overtake plastics in 3D printers of the future, as the MIT Laboratory for Manufacturing and Productivity has demonstrated with its 3D-printed antimicrobial surgical forceps. The mechanically robust, chemically versatile material is just one example of the endless possibilities of 3D-printed materials beyond plastics.
5. “Hey” will be the most frequently used command in design engineering.
“Hey Cogni, design me a new pair of shoes, 8.5 Medium, with load bearing for my weight,” is a phrase Reichental anticipates using in the next five years. Smart 3D printers with natural language processing, AI-powered generative design and customization abilities will allow for seamless design engineering. “Think of the complete fashion industry that doesn’t have cutting and sizing and waste and that can create bespoke garments, shoes, belts, accessories, food,” Reichental notes. “There is not going to be a single industry that will be spared by the next wave of additive and generative design.”
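Returning to the first breakthrough above: printing throughput is a bottleneck problem, since the achievable deposition rate is capped by whichever subsystem saturates first. Here is a toy model of that idea (all numbers are purely illustrative, not MIT's figures):

```python
# Toy model of extrusion-printing throughput: the effective volumetric
# build rate is capped by whichever limit binds first -- extrusion
# force, melting (heating) rate, or gantry traverse speed.
# All rate values below are illustrative, in mm^3 per second.

def build_rate_mm3_per_s(force_limit: float,
                         melt_limit: float,
                         traverse_limit: float) -> float:
    """Effective volumetric rate is the minimum of the three limits."""
    return min(force_limit, melt_limit, traverse_limit)

# A hypothetical desktop printer vs. a printer where all three
# subsystems have been redesigned for higher throughput.
desktop = build_rate_mm3_per_s(force_limit=12, melt_limit=10, traverse_limit=25)
fast = build_rate_mm3_per_s(force_limit=120, melt_limit=110, traverse_limit=130)
print(f"desktop: {desktop} mm^3/s, fast: {fast} mm^3/s "
      f"({fast / desktop:.0f}x speedup)")
```

The design lesson is that speeding up one subsystem alone buys little; the order-of-magnitude gains come from raising all three limits together.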
Final Thoughts…
As new methods and materials continue to spring up, how will you integrate 3D printing into your own business in the coming years? What new ventures will you build around these emerging applications?
Keep these thoughts in mind as we explore AI next week, another catalyzing technology that will only enhance the “Hey” demand functionality of 3D printers and many more devices.
Convergence leads us to transformative breakthroughs… and the human brain still beats computers when it comes to drawing these sorts of connections. Leverage that power, and what else becomes possible?

Please keep me in mind as your Executive Coach, openings for Senior Executive Engagements, and Board of Director openings. If you hear of anything within your network that you think might be a positive fit, I’d so appreciate if you could send a heads up my way. Email me: [email protected] or Schedule a call: Cliff Locks
Contributor: Peter Diamandis
Training and Retooling a Dynamic Workforce Using AR and VR

As I often tell my clients, people generally remember only 10 percent of what we see, 20 percent of what we hear, and 30 percent of what we read, but a staggering 90 percent of what we do or experience.
By introducing gamification, immersive testing activities, and visually rich sensory environments, adult literacy platforms have a winning chance at scalability, retention and user persistence.
Beyond literacy, however, virtual and augmented reality have already begun disrupting the professional training market.
As projected by ABI Research, the enterprise VR training market is on track to exceed $6.3 billion in value by 2022.
Leading the charge, Walmart has already implemented VR across 200 Academy training centers, running over 45 modules and simulating everything from unusual customer requests to a Black Friday shopping rush.
Then in September of last year, Walmart committed to a 17,000-headset order of the Oculus Go to equip every U.S. Supercenter, neighborhood market, and discount store with VR-based employee training.
In the engineering world, Bell Helicopter is using VR to massively expedite development and testing of its latest aircraft, FCX-001. Partnering with Sector 5 Digital and HTC VIVE, Bell found it could compress a typical six-year aircraft design process into just six months, turning physical mockups into CAD-designed virtual replicas.
But beyond the design process itself, Bell is now one of a slew of companies pioneering VR pilot tests and simulations with real-world accuracy. Seated in a true-to-life virtual cockpit, pilots have now tested countless iterations of the FCX-001 in virtual flight, drawing directly onto the 3D model and enacting aircraft modifications in real-time.
And in an expansion of our virtual senses, several key players are already working on haptic feedback. In the case of VR flight, French company Go Touch VR is now partnering with software developer FlyInside on fingertip-mounted haptic tech for aviation.
Dramatically reducing time and trouble required for VR-testing pilots, they aim to give touch-based confirmation of every switch and dial activated on virtual flights, just as one would experience in a full-sized cockpit mockup. Replicating texture, stiffness and even the sensation of holding an object, these piloted devices contain a suite of actuators to simulate everything from a light touch to higher-pressured contact, all controlled by gaze and finger movements.
Learn Anything, Anytime, at Any Age
When it comes to other high-risk simulations, virtual and augmented reality have barely scratched the surface.
Firefighters can now combat virtual wildfires with new platforms like FLAIM Trainer or TargetSolutions. And thanks to the expansion of medical AR/VR services like 3D4Medical or Echopixel, surgeons might soon perform operations on annotated organs and magnified incision sites, speeding up reaction times and vastly improving precision.
But perhaps most urgently, virtual reality will offer an immediate solution to today’s constant industry turnover and large-scale re-education demands.
VR educational facilities with exact replicas of anything from large industrial equipment to minute circuitry will soon give anyone a second chance at the 21st-century job market.
Want to become an electric, autonomous vehicle mechanic at age 44? Throw on a demonetized VR module and learn by doing, testing your prototype iterations at almost zero cost and with no risk of harming others.
Want to be a plasma physicist and play around with a virtual nuclear fusion reactor? Now you’ll be able to simulate results and test out different tweaks, logging Smart Educational Record credits in the process.
As tomorrow’s career model shifts from a “one-and-done graduate degree” to continuous lifelong education, professional VR-based re-education will allow for a continuous education loop, reducing the barrier to entry for anyone wanting to try their hand at a new industry.
Whether in pursuit of fundamental life skills, professional training, linguistic competence or specialized retooling, users of all ages, career paths, income brackets and goals are now encouraged to be students, no longer condemned to stagnancy.
As VR and artificial intelligence converge with demonetized mobile connectivity, we are finally witnessing an era in which no one will be left behind.
Contributor: Peter Diamandis
Bringing artificial intelligence into your organization

The goal is to help you think about the specific benefits of artificial intelligence and the areas you might consider automating in your organization or area of responsibility. Here are examples of successfully deployed artificial intelligence applications. When you need help, reach out to me; my contact information is at the bottom of this post.
AI tool helps companies detect expense account fraud.
Employers across a range of industries are using artificial intelligence in a bid to curb questionable write-offs hidden within employee expense reports, writes Angus Loten for WSJ Pro.
The cost of fraud. The Association of Certified Fraud Examiners, in a report last year, analyzed nearly 2,700 global employee-expense fraud cases detected over the previous year that resulted in $7 billion in losses.

AI-based fraud detection. AppZen offers an auditing tool that works with popular expense-management software packages such as SAP SE’s Concur or Chrome River Technologies Inc.‘s Expense tool. AppZen can scour 100% of employee expense reports, according to the company. The tool’s capabilities include computer vision that is able to read submitted receipts, deep learning that leverages training data to account for nuances or identify anomalies, and semantic analysis to organize objects and relationships, such as currencies, taxes and spend types.
AI can speed, improve audits. Manual audits typically rely on only a random sampling of less than 10% of expense reports, allowing many erroneous or fraudulent claims to slip through undetected, says Anant Kale, AppZen’s chief executive. And while manual audits can take days or even weeks to complete, AppZen’s automated review takes only a few minutes to flag questionable items, the company says. These can range from minor violations, such as accidental double entries for the same expense reported by separate employees, out-of-policy hotel mini-bar purchases or unapproved upgrades to first-class airline seats, to cases where outright fraud may be occurring.
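One of the simple policy checks described above, catching the same receipt submitted by two different employees, can be sketched in a few lines of Python. This is purely illustrative: the record layout and field names are assumptions, and AppZen's actual pipeline (computer vision, deep learning, semantic analysis) is proprietary and far richer.

```python
from collections import defaultdict

def flag_duplicate_expenses(reports):
    """Flag expense entries that share vendor, date, and amount but were
    submitted by different employees -- one of the simple checks an
    automated auditor can run across 100% of reports.
    Each report is a dict with keys: employee, vendor, date, amount."""
    seen = defaultdict(list)
    for r in reports:
        key = (r["vendor"], r["date"], round(r["amount"], 2))
        seen[key].append(r["employee"])
    flagged = []
    for (vendor, date, amount), employees in seen.items():
        if len(set(employees)) > 1:  # same receipt, different submitters
            flagged.append({"vendor": vendor, "date": date, "amount": amount,
                            "employees": sorted(set(employees))})
    return flagged

reports = [
    {"employee": "alice", "vendor": "Hilton", "date": "2019-03-02", "amount": 214.50},
    {"employee": "bob",   "vendor": "Hilton", "date": "2019-03-02", "amount": 214.50},
    {"employee": "alice", "vendor": "Uber",   "date": "2019-03-03", "amount": 18.20},
]
print(flag_duplicate_expenses(reports))
```

The point is scale, not sophistication: a rule this trivial already covers every report ever filed, which no sampling-based manual audit can do.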

Business Transformation

Foot Locker’s game plan to win over sneakerheads. Foot Locker Inc., spurred by growing market pressure to offer a higher degree of personalization and on-demand services, is aiming to integrate and gather data from across its operations—everything from website clicks to delivery preferences—and then apply algorithms to the data to quickly and accurately glean market intelligence, often in real time.
To do all of this, Pawan Verma, chief information and customer connectivity officer at the New York-based sports footwear retailer, has boosted the company’s tech staff roughly 30% over the past three years, while creating separate teams that work on data, apps, interfaces between apps and operating systems, artificial intelligence, augmented reality and machine learning. In an interview with WSJ Pro’s Angus Loten, Mr. Verma spoke about the challenges of turning a 45-year-old shoe retailer into an agile, tech-driven venture for Gen Z “sneaker freaks” and working with data and artificial intelligence.
WSJ: What are your biggest challenges working with data, AI and emerging digital capabilities?
Mr. Verma: There are several areas, but a key one is around security. We are collecting billions of events and using machine-learning software to find a signal from noise. For example, when we have a product launch, such as Nike Air Force or Jordan Retro, billions of bots mimicking customers will try to render our websites and mobile apps useless by staging distributed-denial-of-service attacks on our internal and cloud infrastructure. This can drive customers away from the products they want and impact the social currency of our brand. We created tools, with some vendor partnerships, that deflect bot traffic and protect the site.
Robots
Using robots to comfort the lonely. Sue Karp, who was forced to retire early by a stroke and now lives alone, begins every day by greeting her robot companion, ElliQ. The robot greets her back. “I’ve got dogs, but they don’t exactly come up and say ‘Good morning’ in English,” says Ms. Karp.
Robot pals. Intuition Robotics’ ElliQ can ease senior loneliness, reports the WSJ’s Christopher Mims. Studies have found that loneliness is worse for health than obesity or inactivity, and is as lethal as smoking 15 cigarettes a day. It’s also an epidemic: A recent study from Cigna Corp. found that about half of Americans are lonely.
What ElliQ can do. ElliQ consists of a tablet, a pair of cameras and a small robot head on a post, capable of basic gestures like leaning in to indicate interest and leaning back to signal disengagement. ElliQ can also help its owner connect to family members. Through an app, ElliQ will prompt children and grandchildren to start video chats with their relative, send notes and links, and share photos.
Human-like responses. Unlike Amazon.com Inc.’s Alexa or similar voice-activated assistants, ElliQ is capable of spontaneous communication, has a wide variety of responses and behaves unpredictably. Its creators say this is essential to making it feel, if not alive, then at least present. It uses what its creators call cognitive AI to know when to interrupt with a suggestion—“Take your medicine”—and when to stay quiet, such as when a person has a visitor.
Medicare Advantage might cover ElliQ. The robot is undergoing a trial with 100 participants conducted by researchers from Baycrest Health Sciences hospital in Toronto and the University of California San Francisco, at retirement communities in Palo Alto and Toronto, in part to verify that ElliQ alleviates feelings of loneliness. If so, the robot might be eligible for coverage under Medicare Advantage.
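The "when to interrupt" judgment described above can be caricatured as a small rule table. The sketch below is purely illustrative: ElliQ's cognitive AI is a learned system, and every rule, threshold and field name here is an assumption for the sake of the example.

```python
def should_interrupt(event, context):
    """Toy version of a companion robot's 'when to speak' policy:
    always surface medication reminders unless the user has company
    or is asleep, and offer casual suggestions only during waking
    hours. Illustrative rules only; real systems learn this behavior."""
    if context.get("visitor_present") or context.get("asleep"):
        return False
    if event["kind"] == "medication_reminder":
        return True
    if event["kind"] == "suggestion":
        # casual suggestions restricted to daytime hours
        return 9 <= context["hour"] <= 20
    return False

print(should_interrupt({"kind": "medication_reminder"},
                       {"visitor_present": True, "hour": 10}))   # False: visitor present
print(should_interrupt({"kind": "suggestion"},
                       {"visitor_present": False, "asleep": False, "hour": 10}))  # True
```

Even this toy version shows why context sensing (visitors, sleep, time of day) matters as much as the content of what the robot says.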
Human Capital
HR turns to artificial intelligence to speed recruiting. Human-resource departments are increasingly turning to AI technologies that can help reduce the time to fill open positions, reports the Financial Times. Among the new tools:
• Machine learning devices that can go through huge numbers of applications to find candidates who match an employer’s needs.
• Chatbots that can answer candidate questions and help screen early-stage candidates.
• Video systems that can be used to interview candidates and help determine whether a recruit comes across as confident or passionate.
While some HR tech firms claim their tools are free of bias, that hasn’t proven to always be the case. The systems also need to be trained to effectively screen job candidates. And then there’s the human tendency to overuse new tech tools, which could lead HR to add new steps to their existing processes and extend the hiring process.
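As a toy illustration of the first tool on that list, matching applications against an employer's required skills, consider the sketch below. The candidate data and skill names are invented, and production systems use trained models rather than literal keyword overlap, with the bias caveats just noted.

```python
def score_candidates(candidates, required_skills):
    """Rank candidates by overlap between their listed skills and the
    employer's requirements -- the crude core of automated resume
    screening. `candidates` maps name -> list of skills."""
    required = {s.lower() for s in required_skills}
    ranked = []
    for name, skills in candidates.items():
        overlap = required & {s.lower() for s in skills}
        ranked.append((name, len(overlap) / len(required)))
    # highest match fraction first
    return sorted(ranked, key=lambda t: t[1], reverse=True)

candidates = {
    "Ada":   ["Python", "SQL", "Machine Learning"],
    "Grace": ["COBOL", "SQL"],
}
print(score_candidates(candidates, ["python", "sql"]))  # [('Ada', 1.0), ('Grace', 0.5)]
```

Note how easily bias creeps in: whatever correlates with the training data or keyword list gets rewarded, which is exactly why these tools need auditing before they shorten anyone's hiring funnel.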
Work in the age of AI. Employees and employers have a different perspective on how AI will change the workplace, according to a report in the MIT Sloan Management Review. Workers appear ready to embrace the changes that are coming. More than 60% of workers, according to an Accenture study, have a positive view of the impact of AI on their work. Business leaders, on the other hand, believe that only about one-quarter of their workforce is prepared for AI adoption.
Come together. But common ground can be found. It begins with senior executives seeking clarity around talent gaps and figuring out which skills their workers need. From there, execs should look at how to advance those skills for human-AI collaboration.
A different way to view the world. This calls for a new way of looking at business. First, employers and employees must show each other that they’re willing to adapt to a workplace built around people and intelligent machines. Second, worker education needs to embrace smart technologies to speed learning, expand thinking and bring out latent intelligence. And third, both parties must be motivated to learn and adapt.
Contributor: Peter Diamandis
Next Data-Driven Healthtech Revolution System

Increasing your healthspan (i.e. making 100 years old the new 60) will depend to a large degree on artificial intelligence.
Health Nucleus: Transforming ‘Sick Care’ to Healthcare
Much of today’s healthcare system is actually sick care.
Most of us assume that we’re perfectly healthy, with nothing going on inside our bodies, until the day we travel to the hospital writhing in pain only to discover a serious or life-threatening condition.
Chances are that your ailment didn’t materialize that morning; rather, it’s been growing or developing for some time. You simply weren’t aware of it.
At that point, once you’re diagnosed as “sick,” our medical system engages to take care of you.
What if, instead of this retrospective and reactive approach, you were constantly monitored, so that you could know the moment anything was out of whack?
Better yet, what if you more closely monitored those aspects of your body that your gene sequence predicted might cause you difficulty? Think: your heart, your kidneys, your breasts.
Such a system becomes personalized, predictive and possibly preventative.
This is the mission of the Health Nucleus platform built by Human Longevity, Inc. (HLI).
While not continuous — that will come later, with the next generation of wearable and implantable sensors — the Health Nucleus was designed to ‘digitize’ you once per year to help you determine whether anything is going on inside your body that requires immediate attention.
The Health Nucleus visit provides you with the following tests during a half-day visit:
- Whole genome sequencing (30x coverage)
- Whole body (non-contrast) MRI
- Brain magnetic resonance imaging/angiography (MRI/MRA)
- CT (computed tomography) of the heart and lungs
- Coronary artery calcium scoring
- Electrocardiogram
- Echocardiogram
- Continuous cardiac monitoring
- Clinical laboratory tests and metabolomics
In late 2018, HLI published the results of the first 1,190 clients through the Health Nucleus.
The results were eye-opening — especially since these patients were all financially well-off, and already had access to the best doctors.
Following are the physiological and genomic findings in these clients who self-selected to undergo evaluation at HLI’s Health Nucleus.
Physiological Findings
- 2 percent had previously unknown tumors detected by MRI
- 2.5 percent had previously undetected aneurysms detected by MRI
- 8 percent had cardiac arrhythmia found on cardiac rhythm monitoring, not previously known
- 9 percent had moderate-severe coronary artery disease risk, not previously known
- 16 percent discovered previously unknown cardiac structure/function abnormalities
- 30 percent had elevated liver fat, not previously known
Genomic Findings
- 24 percent of clients had a rare (previously unknown) genetic mutation identified by whole genome sequencing (WGS)
- 63 percent of clients had a rare genetic mutation with a corresponding phenotypic finding
In summary, HLI’s published results found that 14.4 percent of clients had significant findings that are actionable, requiring immediate or near-term follow-up and intervention.
Long-term value findings were found in 40 percent of the clients we screened.
Long-term clinical findings include discoveries that require medical attention or monitoring but are not immediately life-threatening.
The bottom line: most people truly don’t know their actual state of health.
The ability to take a fully digital deep dive into your health status at least once per year will enable you to detect disease at Stage 0 or Stage 1, when it is most curable.
Sensors, Wearables and Nanobots
Wearables, connected devices and quantified self apps will allow us to continuously collect enormous amounts of useful health information.
Wearables like the Quanttus wristband and Vital Connect can transmit your electrocardiogram data, vital signs, posture and stress levels anywhere on the planet.
In April 2017, Peter Diamandis and his team granted $2.5 million in prize money to the winning team in the Qualcomm Tricorder XPRIZE, Final Frontier Medical Devices.
Using a group of noninvasive sensors that collect data on vital signs, body chemistry and biological functions, Final Frontier integrates this data in their powerful, AI-based DxtER diagnostic engine for rapid, high-precision assessments.
Their engine combines learnings from clinical emergency medicine and data analysis from actual patients.
Google is developing a full range of internal and external sensors (e.g. smart contact lenses) that can monitor the wearer’s vitals, ranging from blood sugar levels to blood chemistry.
In September 2018, Apple announced its Series 4 Apple Watch, including an FDA-approved mobile, on-the-fly ECG.
Granted its first FDA approval, Apple appears to be moving deeper into the sensing healthcare market.
Further, Apple is reportedly now developing sensors that can non-invasively monitor blood sugar levels in real time for diabetic treatment. IoT-connected sensors are also entering the world of prescription drugs.
Last year, the FDA approved the first sensor-embedded pill, Abilify MyCite.
This new class of digital pills can now communicate medication data to a user-controlled app, to which doctors may be granted access for remote monitoring.
Perhaps what is most impressive about the next generation of wearables and implantables is the density of sensors, processing, networking and battery capability that we can now cheaply and compactly integrate.
Take the second-generation OURA ring, for example, which focuses on sleep measurement and management.
The OURA ring looks like a slightly thick wedding band, yet contains an impressive array of sensors and capabilities, including:
- 2 infrared LEDs
- 1 infrared sensor
- 3 temperature sensors
- 1 accelerometer
- a 6-axis gyro
- a curved battery with a 7-day life
- the memory, processing and transmission capability required to connect with your smartphone
Disrupting Medical Imaging Hardware
In 2018, we saw lab breakthroughs that will drive the cost of an ultrasound sensor to below $100, in a packaging smaller than most bandages, powered by a smartphone.
Dramatically disrupting ultrasound is just the beginning.
Nanobots & Nanonetworks
While wearables have long been able to track and transmit our steps, heart rate and other health data, smart nanobots and ingestible sensors will soon be able to monitor countless new parameters and even help diagnose disease.
Some of the most exciting breakthroughs in smart nanotechnology from the past year include:
Researchers from the École polytechnique fédérale de Lausanne (EPFL) and the Swiss Federal Institute of Technology in Zurich (ETH Zurich) demonstrated artificial microrobots that can swim and navigate through different fluids, independent of additional sensors, electronics or power transmission.
Researchers at the University of Chicago proposed specific arrangements of DNA-based molecular logic gates to capture the information contained in the temporal portion of our cells’ communication mechanisms. Accessing the otherwise-lost time-dependent information of these cellular signals is akin to knowing the tune of a song, rather than solely the lyrics.
MIT researchers built micron-scale robots able to sense, record, and store information about their environment. These tiny robots, about 100 micrometers in diameter (approximately the size of a human egg cell), can also carry out preprogrammed computational tasks.
Engineers at the University of California, San Diego developed ultrasound-powered nanorobots that swim efficiently through your blood, removing harmful bacteria and the toxins they produce.
But it doesn’t stop there.
As nanosensor and nanonetworking capabilities develop, these tiny bots may soon communicate with each other, enabling the targeted delivery of drugs and autonomous corrective action.
Mobile Health
The OURA ring and the Series 4 Apple Watch are just the tip of the spear when it comes to our future of mobile health. This field, predicted to become a $102 billion market by 2022, puts an on-demand virtual doctor in your back pocket.
Step aside, WebMD.
In true exponential technology fashion, mobile device penetration has increased dramatically, while image recognition error rates and sensor costs have sharply declined.
As a result, AI-powered medical chatbots are flooding the market; diagnostic apps can identify anything from a rash to diabetic retinopathy; and with the advent of global connectivity, mHealth platforms enable real-time health data collection, transmission and remote diagnosis by medical professionals.
Already available to residents across North London, Babylon Health offers immediate medical advice through AI-powered chatbots and video consultations with doctors via its app.
Babylon now aims to build up its AI for advanced diagnostics and even prescription. Others, like Woebot, take on mental health, using Cognitive Behavioral Therapy in communications over Facebook Messenger with patients suffering from depression.
In addition to phone apps and add-ons that test for fertility or autism, the now-FDA-approved Clarius L7 Linear Array Ultrasound Scanner can connect directly to iOS and Android devices and perform wireless ultrasounds at a moment’s notice.
Next, Healthy.io, an Israeli startup, uses your smartphone and computer vision to analyze traditional urine test strips — all you need to do is take a few photos.
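Under the hood, reading a test strip from a photo amounts to matching each pad's color against a reference chart. Here is a minimal sketch of that core idea, assuming pre-averaged RGB values and an invented glucose chart; Healthy.io's real pipeline is far more sophisticated, calibrating for lighting conditions and camera differences.

```python
def nearest_reference(pad_rgb, reference_chart):
    """Match the averaged RGB color of a test-strip pad to the closest
    entry on a reference chart, as a strip-reading app might do after
    normalizing the photo. Chart values below are illustrative only."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(reference_chart, key=lambda level: sq_dist(pad_rgb, reference_chart[level]))

# Hypothetical glucose pad chart: result label -> reference RGB
glucose_chart = {
    "negative": (120, 200, 170),
    "trace":    (140, 180, 120),
    "high":     (150, 120, 60),
}
print(nearest_reference((145, 175, 115), glucose_chart))  # closest to "trace"
```

The hard engineering problem is everything around this nearest-color lookup: correcting white balance, locating the pads in the frame, and validating against lab results, which is what makes a smartphone-camera test clinically usable.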
With mHealth platforms like ClickMedix, which connects remotely located patients to medical providers through real-time health data collection and transmission, what’s to stop us from delivering needed treatments through drone delivery or robotic telesurgery?
Welcome to the age of smartphone-as-a-medical-device.
Conclusion
With these DIY data collection and diagnostic tools, we cut transportation costs, in both time and money, and eliminate time bottlenecks.
No longer will you need to wait for your urine or blood results to go through the current information chain: samples sent to the lab, analyzed by a technician, results interpreted by your doctor, and only then relayed to you.
Just like the “sage-on-the-stage” issue with today’s education system, healthcare has a “doctor-on-the-dais” problem.
Current medical procedures are too complicated and expensive for a layperson to perform and analyze on their own. The coming abundance of healthcare data promises to transform how we approach healthcare, putting the power of exponential technologies in the patient’s hands and revolutionizing how we live.

Contributor: Peter Diamandis