The Sensor Explosion & the Rise of IoT

“Hey Google, how’s my health this morning?”
“One moment,” says your digital assistant.
It takes thirty seconds for the full diagnostic to run, as the system deploys dozens of sensors capturing gigabytes of data.
Smart sensors in toothbrush and toilet, wearables in bedding and clothing, implantables inside your body—a mobile health suite with a 360-degree view of your system.

“Your microbiome looks perfect,” Google tells you. “Also, blood glucose levels are good, vitamin levels fine, but an increased core temperature and IgE levels…”
“Google—in plain English?”
“You’ve got a virus.”
“A what?”
“I ran through your last forty-eight hours of meetings. It seems like you picked it up Monday, at Jonah’s birthday party. I’d like to run additional diagnostics. Would you mind using the….?”
As the Internet of Things catapults to new heights, Google is developing a full range of internal and external sensors, monitoring everything from blood sugar to blood chemistry.

The list of once multi-million dollar medical machines now being dematerialized, demonetized, democratized and delocalized—that is, made into portable and even wearable sensors—could fill a textbook.
Sensor Proliferation
Sensors will transform far more than healthcare and diagnostics. Any electronic device that measures a physical, quantitative value—light, acceleration, temperature, and so on—and then sends that information to other devices on a network qualifies as a sensor.
Sensors add intelligence to our appliances. But more importantly, they add hours to our lives.
Consider that in less than a decade, when you run out of coffee, your kitchen cabinet will detect the shortage (cross-referencing sensor data with your coffee-drinking habits), and a blockchain-enabled smart contract will place an order, triggering an Amazon drone delivery directly to your doorstep.
And of course, your very own Butler-bot might soon transport these freshly ground beans from delivery box to cabinet, sparing you the trouble.
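To make that pipeline concrete, here is a minimal sketch in Python of the sense-and-reorder logic described above. The consumption estimate, threshold, and place_order hook are hypothetical stand-ins, not any particular vendor's API.

```python
from datetime import date, timedelta

# Hypothetical daily consumption estimate, derived from past sensor readings.
AVG_GRAMS_PER_DAY = 30
REORDER_LEAD_DAYS = 3  # order early enough for the delivery to arrive

def days_of_coffee_left(grams_remaining: float) -> float:
    """Cross-reference the cabinet's weight sensor with drinking habits."""
    return grams_remaining / AVG_GRAMS_PER_DAY

def check_and_reorder(grams_remaining: float, place_order) -> bool:
    """Trigger an order (e.g. a smart-contract call) when supply runs low."""
    if days_of_coffee_left(grams_remaining) <= REORDER_LEAD_DAYS:
        place_order(item="coffee beans", quantity_grams=1000,
                    deliver_by=date.today() + timedelta(days=REORDER_LEAD_DAYS))
        return True
    return False

# Example: the cabinet's load cell reports 80 g left, roughly 2.7 days of supply.
check_and_reorder(80, place_order=lambda **kw: print("order placed:", kw))
```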
If advances in computing power, AI, and networks represent the center mass of the digital revolution, then today’s sensor uprising is the outer edge of that revolt.
As the first stage of tomorrow's smart-environment information-processing pipeline, sensors are the data-gathering apparatus that provides our computers with the information they need to act.
Case Study: The Oura Ring
Not much more than a sleek, black band, the Oura Ring is the most accurate sleep tracker on the market, thanks to the sensor suite packed inside it.


The product began in 2014 at an infectious disease lab in Finland. Health researcher Petteri Lahtela noticed that many of the diseases he’d been studying, including Lyme disease, heart disease and diabetes, shared a curious overlap: all of them negatively affected sleep.
Lahtela started to wonder whether these diseases caused insomnia or whether it worked the other way around. Could these conditions be alleviated, or at least improved, by fixing sleep?
To solve that puzzle, Lahtela decided he needed data, so he turned to sensors. In 2015, driven by advances in smartphones, we saw the convergence of incredibly small and powerful batteries with incredibly small and powerful sensors.
So small and powerful, in fact, that building a whole new kind of sleep tracker might be possible.
The sensors that caught Lahtela’s fancy were a new breed of heart rate monitors, particularly because heart rate and heart rate variability serve as excellent indicators of sleep quality. Yet at the time, all such trackers on the market were riddled with issues.
Fitbit and the Apple Watch, for instance, measure blood flow in the wrist via an optical sensor. Yet the wrist’s arteries sit too far below the surface for perfect measurement, and people don’t often wear watches to bed—as smart watches can interrupt the very sleep they’re designed to measure.
Lahtela’s upgrade? The Oura ring.
Location and sampling rate are its secret weapons. Because the finger’s arteries are closer to the surface than those in the wrist, the Oura gets a far better picture of the action. Plus, while Apple and Garmin measure blood flow twice a second, and Fitbit raises that figure to twelve times per second, the Oura ring captures data 250 times per second.
And in studies conducted by independent labs, the ring is 99 percent accurate compared to medical grade heart rate trackers, and 98 percent accurate for heart rate variability.
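A rough illustration of why sampling rate matters: heart rate variability is computed from the timing of successive beats, and the precision of those inter-beat intervals is bounded by the sampling period (roughly 4 ms at 250 Hz versus 500 ms at 2 Hz). The sketch below computes RMSSD, a standard time-domain HRV metric, from hypothetical inter-beat intervals; it is illustrative only and not Oura's actual algorithm.

```python
import math

def rmssd(ibi_ms: list[float]) -> float:
    """Root mean square of successive differences between inter-beat intervals (ms),
    a common time-domain heart rate variability metric."""
    diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def quantize(ibi_ms: list[float], sample_rate_hz: float) -> list[float]:
    """Round each interval to the nearest sampling period to mimic a slower sensor's
    timing resolution."""
    period = 1000.0 / sample_rate_hz
    return [round(x / period) * period for x in ibi_ms]

beats = [812.0, 798.0, 825.0, 804.0, 818.0, 791.0]  # hypothetical intervals in ms
print(rmssd(quantize(beats, 250)))  # fine-grained timing (~4 ms resolution)
print(rmssd(quantize(beats, 2)))    # coarse timing (500 ms resolution) erases the variability
```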
Twenty years ago, sensors with this level of accuracy would have cost in the millions, requiring reasonably sized data centers and tremendous overhead processing costs.
Today, the Oura costs around $300 and sits on your finger—a perfect example of sensors’ exponential growth.
Connected Devices and IoT
We are in the middle of a sensor revolution. The street name for this uprising is the “Internet of Things,” the huge mesh network of interconnected smart devices that will soon span the globe.
And it’s worth tracing the evolution of this revolution to understand how far we’ve come.
In 1989, John Romkey, one of the early developers of TCP/IP, connected a Sunbeam toaster to the internet, making it the very first IoT device.
Ten years later, sociologist Neil Gross saw the writing on the wall and made a now famous prediction in the pages of Business Week: “In the next century, planet Earth will don an electric skin. It will use the Internet as a scaffold to support and transmit its sensations […] These will monitor cities and endangered species, the atmosphere, our ships, highways and fleets of trucks, our conversations, our bodies—even our dreams.”
A decade later in 2009, Gross’ prediction bore out: the number of devices connected to the Internet exceeded the number of people on the planet (12.5 billion devices, 6.8 billion people, or 1.84 connected devices per person).
A year later, driven primarily by the evolution of smartphones, sensor prices began to plummet. By 2015, all this progress added up to 15 billion connected devices, with researchers at Stanford predicting 50 billion by 2020.
As most of these devices contain multiple sensors—the average smartphone has about twenty—this also explains why 2020 marks the debut of what’s been called “our trillion-sensor world.”
Nor will we stop there. By 2030, those same Stanford researchers estimate 500 billion connected devices. And according to Accenture, this translates into a US$14.2 trillion economy.

Hidden behind these numbers is exactly what Gross had in mind—an electric skin that registers just about every sensation on the planet.
Consider optical sensors. The first digital camera, built in 1975 by Kodak engineer Steven Sasson, was the size of a toaster oven, took twelve black-and-white images, and cost over ten thousand dollars. Today, the average camera that accompanies your smartphone shows a thousand-fold improvement in weight, cost, and resolution.
And these cameras are everywhere: in cars, drones, phones, satellites— with uncanny image resolution to boot. Already, satellites photograph the Earth down to the half-meter range. Drones shrink that to a centimeter. And the LIDAR sensors atop autonomous cars are on track to capture just about everything—gathering 1.3 million data points per second, and registering change down to the single photon level.
Implications
We see this triple trend—of plummeting size and cost, alongside mass increases in performance—everywhere.
The first commercial GPS receiver hit shelves in 1981, weighing 53 pounds and costing $119,900. By 2010, GPS had shrunk to a five-dollar chip small enough to sit on your fingertip.
The “inertial measurement unit” that guided our early rockets was a 50-pound, $20 million device in the mid-60s. Today, the accelerometer and gyroscope in your cellphone do the same job, yet cost about four dollars and weigh less than a grain of rice.
And these trends are only going to continue. We’re moving from the world of the microscopic, to the world of the nanoscopic.
As a result, we’ve begun to see an oncoming wave of smart clothing, jewelry, glasses—the Oura ring being but one example. Soon, these sensors will migrate to our inner bodies. Alphabet’s Verily branch is working on a miniaturized continuous blood glucose monitor that could assist diabetics in everyday treatment.
Research on smart dust, a dust-mote-sized system that can sense, store, and transmit data, has been progressing for years. Today, a “mote” is the size of an apple seed. Tomorrow, at the nano-scale, these motes will float through our bloodstream, exploring one of the last great terrae incognitae: the interior of the human body.
We’re about to learn a whole lot more, and not just about the body. About everything. The data haul from these sensors is beyond comprehension. An autonomous car generates four terabytes a day, or a thousand feature-length films’ worth of information. A commercial airliner: forty terabytes. A smart factory: a petabyte. So what does this data haul get us? Plenty.
Doctors no longer have to rely on annual check-ups to track patient health, as they now get a blizzard of quantified-self data streaming in 24-7.
Farmers now know the moisture content in both the soil and the sky, allowing pinpoint watering for healthier crops, bigger yields and—a critical factor in the wake of climate change—far less water waste.
In business, agility has been the biggest advantage. In times of rapid change, lithe and nimble trumps slow and lumbering, every time. While knowing every available detail about one’s customers is an admitted privacy concern, it does provide organizations with an incredible level of dexterity, which may be the only way to stay in business in tomorrow’s accelerated times.
Final Thoughts
Within a decade, we will live in a world where just about anything that can be measured will be measured— all the time. It will not be your knowledge that matters, but rather the questions you ask.
It’s a world of radical transparency, where privacy concerns will take on a whole new meaning.
From the edge of space to the bottom of the ocean to the inside of your bloodstream, our world’s emerging electric skin is producing a sensorium of endlessly available information. And riding rapid advances in AI, this “skin” possesses the machine learning required to make sense of that information.
Welcome to the hyper-conscious planet.

How AR, AI, Sensors & Blockchain are Merging Into Web 3.0

How each of us sees the world is about to change dramatically…
For all of human history, the experience of looking at the world was roughly the same for everyone. But boundaries between the digital and physical are beginning to fade.
The world around us is gaining layer upon layer of digitized, virtually overlaid information — making it rich, meaningful, and interactive. As a result, our respective experiences of the same environment are becoming vastly different, personalized to our goals, dreams, and desires.
Welcome to Web 3.0, aka The Spatial Web. In version 1.0, static documents and read-only interactions limited the internet to one-way exchanges. Web 2.0 provided quite an upgrade, introducing multimedia content, interactive web pages, and participatory social media. Yet, all this was still mediated by 2D screens.
And today, we are witnessing the rise of Web 3.0, riding the convergence of high-bandwidth 5G connectivity, rapidly evolving AR eyewear, an emerging trillion-sensor economy, and ultra-powerful AIs.
As a result, we will soon be able to superimpose digital information atop any physical surrounding—freeing our eyes from the tyranny of the screen, immersing us in smart environments, and making our world endlessly dynamic.
In this third blog of our five-part series on augmented reality, we will explore the convergence between AR, AI, sensors, and blockchain, diving into the implications through a key use case in manufacturing.
A Tale of Convergence
Let’s deconstruct everything beneath the sleek AR display.
It all begins with Graphics Processing Units (GPUs)—electronic circuits that perform rapid calculations to render images. (GPUs can be found in mobile phones, game consoles, and computers.)
However, because AR requires such extensive computing power, single GPUs will not suffice. Instead, blockchain can now enable distributed GPU processing power, and blockchains specifically dedicated to AR holographic processing are on the rise.
Next up, cameras and sensors will aggregate real-time data from any environment to seamlessly integrate physical and virtual worlds. Meanwhile, body-tracking sensors are critical for aligning a user’s self-rendering in AR with a virtually enhanced environment. Depth sensors then provide data for 3D spatial maps, while cameras absorb more surface-level, detailed visual input. In some cases, sensors might even collect biometric data, such as heart rate and brain activity, to incorporate health-related feedback in our everyday AR interfaces and personal recommendation engines.
The next step in the pipeline involves none other than AI. Processing enormous volumes of data instantaneously, embedded AI algorithms will power customized AR experiences in everything from artistic virtual overlays to personalized dietary annotations.
In retail, AIs will use your purchasing history, current closet inventory, and possibly even mood indicators to display digitally rendered items most suitable for your wardrobe, tailored to your measurements.
In healthcare, smart AR glasses will provide physicians with immediately accessible and maximally relevant information (parsed from the entirety of a patient’s medical records and current research) to aid in accurate diagnoses and treatments, freeing doctors to engage in the more human-centric tasks of establishing trust, educating patients and demonstrating empathy.
Convergence in Manufacturing
One of the nearest-term use cases of AR is manufacturing, as large producers begin dedicating capital to enterprise AR headsets. And over the next ten years, AR will converge with AI, sensors, and blockchain to multiply manufacturer productivity and employee experience.
(1) Convergence with AI
In initial applications, digital guides superimposed on production tables will vastly improve employee accuracy and speed, while minimizing error rates.
Already, the International Air Transport Association (IATA)—whose member airlines carry 82 percent of air traffic—has implemented industrial tech company Atheer’s AR headsets in cargo management. And with barely any delay, IATA reported a whopping 30 percent improvement in cargo handling speed and no less than a 90 percent reduction in errors.
With similar success, Boeing brought Skylight’s smart AR glasses to the runway; the glasses are now used in the manufacture of hundreds of airplanes. Sure enough, the aerospace giant has seen a 25 percent drop in production time and near-zero error rates.
Beyond cargo management and air travel, however, smart AR headsets will also enable on-the-job training without reducing the productivity of other workers or sacrificing hardware. Jaguar Land Rover, for instance, implemented Bosch’s Re’flekt One AR solution to gear technicians with “x-ray” vision: allowing them to visualize the insides of Range Rover Sport vehicles without removing any dashboards.
And as enterprise capabilities continue to soar, AIs will soon become the go-to experts, offering support to manufacturers in need of assembly assistance. Instant guidance and real-time feedback will dramatically reduce production downtime, boost overall output, and even help customers struggling with DIY assembly at home.
Perhaps one of the most profitable business opportunities, AR guidance through centralized AI systems will also serve to mitigate supply chain inefficiencies at extraordinary scale. Coordinating moving parts, eliminating the need for manned scanners at each checkpoint, and directing traffic within warehouses, joint AI-AR systems will vastly improve workflow while overseeing quality assurance.
After its initial implementation of AR “vision picking” in 2015, leading courier company DHL recently announced it would continue to use Google’s newest smart lens in warehouses across the world. Motivated by the initial group’s reported 15 percent jump in productivity, DHL’s decision is part of the logistics giant’s $300 million investment in new technologies.
And as direct-to-consumer e-commerce fundamentally transforms the retail sector, supply chain optimization will only grow increasingly vital. AR could very well prove the definitive step for gaining a competitive edge in delivery speeds.
As explained by Vital Enterprises CEO Ash Eldritch, “All these technologies that are coming together around artificial intelligence are going to augment the capabilities of the worker and that’s very powerful. I call it Augmented Intelligence. The idea is that you can take someone of a certain skill level and by augmenting them with artificial intelligence via augmented reality and the Internet of Things, you can elevate the skill level of that worker.”
Already, large producers like Goodyear, thyssenkrupp, and Johnson Controls are using the Microsoft HoloLens 2—priced at $3,500 per headset—for manufacturing and design purposes.
Perhaps the most heartening outcome of the AI-AR convergence is that, rather than replacing humans in manufacturing, AR is an ideal interface for human collaboration with AI. And as AI merges with human capital, prepare to see exponential improvements in productivity, professional training, and product quality.
(2) Convergence with Sensors
On the hardware front, these AI-AR systems will require a mass proliferation of sensors to detect the external environment and apply computer vision in AI decision-making.
To measure depth, for instance, some scanning depth sensors project a structured pattern of infrared light dots onto a scene, then detect and analyze the reflected light to generate 3D maps of the environment. Stereoscopic imaging, which uses two lenses, has also been commonly employed for depth measurement. But leading technologies like Microsoft’s HoloLens 2 and Intel’s RealSense 400-series cameras implement a newer method called “phased time-of-flight” (ToF).
In ToF sensing, the HoloLens 2 fires numerous lasers, each with 100 milliwatts (mW) of power, in quick bursts. The distance to nearby objects is then measured by how far the returning light has shifted in phase relative to the emitted signal. That phase difference reveals the location of each object within the field of view, enabling accurate hand tracking and surface reconstruction.
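In rough terms, the underlying math looks like the sketch below: the sensor emits light whose intensity is modulated at a known frequency, and the measured phase lag of the return signal maps directly to round-trip distance. The 60 MHz modulation frequency here is an arbitrary illustrative value, not the HoloLens 2's actual figure.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(phase_shift_rad: float, modulation_hz: float) -> float:
    """Distance implied by the phase lag between emitted and returned modulated light.
    One full 2*pi cycle of phase corresponds to one modulation wavelength of
    round-trip travel, so distance = (c / (2 * f)) * (phase / (2 * pi))."""
    return (C / (2 * modulation_hz)) * (phase_shift_rad / (2 * math.pi))

# Example: 60 MHz modulation (illustrative) gives an unambiguous range of ~2.5 m;
# a measured phase lag of pi/2 then corresponds to an object about 0.62 m away.
print(tof_distance(math.pi / 2, 60e6))
```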
Requiring far less computing power, the phased ToF sensor is also more durable than stereoscopic sensing, which relies on the precise alignment of two prisms. The phased ToF sensor’s silicon base also makes it easy to mass-produce, rendering the HoloLens 2 a far better candidate for widespread consumer adoption.
To apply inertial measurement—typically used in airplanes and spacecraft—the HoloLens 2 additionally uses a built-in accelerometer, gyroscope, and magnetometer. Further equipped with four “environment understanding cameras” that track head movements, the headset also uses a 2.4MP HD photographic video camera and ambient light sensor that work in concert to enable advanced computer vision.
For natural viewing experiences, sensor-supplied gaze tracking increasingly creates depth in digital displays. Nvidia’s work on Foveated AR Display, for instance, brings the primary foveal area into focus, while peripheral regions fall into a softer background— mimicking natural visual perception and concentrating computing power on the area that needs it most.
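The principle behind foveated rendering is simple enough to sketch: render at full resolution only within a small angular radius of the gaze point reported by the eye tracker, and drop the resolution as eccentricity grows. The tiers and angles below are illustrative placeholders, not Nvidia's actual parameters.

```python
def render_scale(eccentricity_deg: float) -> float:
    """Fraction of full resolution to render at a given angular distance from the
    gaze point. Illustrative tiers only: full detail in the fovea, coarser outward."""
    if eccentricity_deg <= 5.0:      # foveal region: full resolution
        return 1.0
    elif eccentricity_deg <= 20.0:   # near periphery: half resolution per axis
        return 0.5
    else:                            # far periphery: quarter resolution per axis
        return 0.25

# A tile 30 degrees from the gaze point is rendered at 25% scale in each axis,
# cutting its pixel count (and shading cost) by roughly 16x.
print(render_scale(30.0))
```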
Gaze-tracking sensors are also slated to grant users control over their (now immersive) screens without any hand gestures. Simple visual cues, such as staring at an object for more than three seconds, will trigger commands almost instantaneously.
And our manufacturing example above is not the only one. Stacked convergence of blockchain, sensors, AI and AR will disrupt almost every major industry.
Take healthcare, for example, where biometric sensors will soon customize users’ AR experiences. Already, MIT Media Lab’s Deep Reality group has created an underwater VR relaxation experience that responds to real-time brain activity detected by a modified version of the Muse EEG headband. The experience even adapts to users’ biometric data, from heart rate to electrodermal activity (captured by an Empatica E4 wristband).
Now rapidly dematerializing, sensors will converge with AR to improve physical-digital surface integration, intuitive hand and eye controls, and an increasingly personalized augmented world. Keep an eye on companies like MicroVision, now making tremendous leaps in sensor technology.
While I’ll be doing a deep dive into sensor applications across each industry in our next blog, it’s critical to first discuss how we might power sensor- and AI-driven augmented worlds.
(3) Convergence with Blockchain
Because AR requires much more compute power than typical 2D experiences, centralized GPUs and cloud computing systems are hard at work to provide the necessary infrastructure. Nonetheless, the workload is taxing and blockchain may prove the best solution.
A major player in this pursuit, Otoy aims to create the largest distributed GPU network in the world, called the Render Network (RNDR). Built on the Ethereum blockchain specifically for holographic media, and currently in beta testing, this network is set to make AR deployment far more accessible.
Alphabet Chairman Eric Schmidt (an investor in Otoy’s network), has even said, “I predicted that 90% of computing would eventually reside in the web based cloud… Otoy has created a remarkable technology which moves that last 10%—high-end graphics processing—entirely to the cloud. This is a disruptive and important achievement. In my view, it marks the tipping point where the web replaces the PC as the dominant computing platform of the future.”
Leveraging the crowd, RNDR allows anyone with a GPU to contribute their power to the network for a commission of up to $300 a month in RNDR tokens. These tokens can then be redeemed for cash or used to create the contributors’ own AR content.
In a double win, Otoy’s blockchain network and similar iterations not only allow designers to profit when not using their GPUs, but also democratize the experience for newer artists in the field.
And beyond these networks’ power suppliers, distributing GPU processing power will allow more manufacturing companies to access AR design tools and customize learning experiences. By further dispersing content creation across a broad network of individuals, blockchain also has the valuable potential to boost AR hardware investment across a number of industry beneficiaries.
On the consumer side, startups like Scanetchain are also entering the blockchain-AR space for a different reason. Allowing users to scan items with their smartphone, Scanetchain’s app provides access to a trove of information, from manufacturer and price, to origin and shipping details.
Based on NEM (a peer-to-peer cryptocurrency that implements a blockchain consensus algorithm), the app aims to make information far more accessible and, in the process, create a social network of purchasing behavior. Users earn tokens by watching ads, and all transactions are hashed into blocks and securely recorded.
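As a generic illustration of what "hashed into blocks" means (not NEM's or Scanetchain's actual implementation), the sketch below bundles a few transaction records, hashes them together with the previous block's hash, and uses the result as the new block's identifier, so any later tampering changes the hash and becomes detectable.

```python
import hashlib
import json

def block_hash(transactions: list[dict], prev_hash: str) -> str:
    """Hash a batch of transactions together with the previous block's hash (SHA-256),
    chaining blocks so that altering any recorded transaction invalidates the chain."""
    payload = json.dumps({"prev": prev_hash, "txs": transactions}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

genesis = "0" * 64
txs = [{"user": "alice", "action": "scan", "item": "sku-123", "tokens": 2}]
print(block_hash(txs, genesis))  # the new block's identifier
```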
The writing is on the wall—our future of brick-and-mortar retail will largely lean on blockchain to create the necessary digital links.
Final Thoughts
Integrating AI into AR creates an “auto-magical” manufacturing pipeline that will fundamentally transform the industry, cutting down on marginal costs, reducing inefficiencies and waste, and maximizing employee productivity.
Bolstering the AI-AR convergence, sensor technology is already blurring the boundaries between our augmented and physical worlds, soon to be near-undetectable. While intuitive hand and eye motions dictate commands in a hands-free interface, biometric data is poised to customize each AR experience to be far more in touch with our mental and physical health.
And underpinning it all, distributed computing power with blockchain networks like RNDR will democratize AR, boosting global consumer adoption at plummeting price points.
As AR soars in importance—whether in retail, manufacturing, entertainment, or beyond—the stacked convergence discussed above merits significant investment over the next decade. Already, 52 Fortune 500 companies have begun testing and deploying AR/VR technology. And while global revenue from AR/VR stood at $5.2 billion in 2016, market intelligence firm IDC predicts the market will exceed $162 billion in value by 2020.
The augmented world is only just getting started.

Contributors: Peter Diamandis and Clifford Locks