The Investment Capital Growth Blog

Welcome To The ICG Blog

Strategic Insights For Business Leaders & Their Teams

Investment Capital Growth is dedicated to the personal and professional development of C-Level Executives and the management teams that run modern businesses. Our blog shares insights and strategies culled from years of entrepreneurial and executive experience. Our thought leaders regularly publish business articles to inspire and empower.

Get Inspired, Stay Connected:

  • Subscribe To Our Blog For Updates
  • Follow ICG on Social Media
  • Engage Our Consultants

Subscribe To The ICG Blog


Posts by Topic

Predictive Mapping with Artificial Intelligence: A Powerful Combination

Posted by Cliff Locks On September 28, 2020 at 10:30 am / In: Uncategorized


Between 2005 and 2014, natural disasters claimed the lives of over 700,000 people and caused more than US$1.4 trillion in damage.

During the past 50 years, the frequency of recorded natural disasters has surged nearly five-fold.

And as wildfires grow increasingly untamable, wreaking havoc across regions like the Amazon and California, the need for rapid response and smart prevention is higher than ever.

In this blog, I’ll be exploring how converging exponential technologies (AI, Robotics, Drones, Sensors, Networks) are transforming the future of disaster relief — how we can prevent catastrophe in the first place and get help to victims during that first golden hour wherein immediate relief can save lives.

Here are the three areas of greatest impact:

  1. AI, predictive mapping, and the power of the crowd
  2. Next-gen robotics and swarm solutions
  3. Aerial drones and immediate aid supply

Let’s dive in!

When it comes to immediate and high-precision emergency response, data is gold

Already, the meteoric rise of space-based networks, stratosphere-hovering balloons, and 5G telecommunications infrastructure is in the process of connecting every last individual on the planet.

Aside from democratizing the world’s information, however, this upsurge in connectivity will soon grant anyone, particularly those most vulnerable to natural disasters, the ability to broadcast detailed geotagged data.

Armed with the power of data broadcasting and the force of the crowd, disaster victims now play a vital role in emergency response, turning a historically one-way blind rescue operation into a two-way dialogue between connected crowds and smart response systems.

With a skyrocketing abundance of data, however, comes a new paradigm: one in which we no longer face a scarcity of answers. Instead, it will be the quality of our questions that matters most. 

This is where AI comes in: our mining mechanism.

In the case of emergency response, what if we could strategically map an almost endless amount of incoming data points? Or predict the dynamics of a flood and identify a tsunami’s most vulnerable targets before it even strikes? Or even amplify critical signals to trigger automatic aid by surveillance drones and immediately alert crowdsourced volunteers? 

Already, a number of key players are leveraging AI, crowdsourced intelligence, and cutting-edge visualizations to optimize crisis response and multiply relief speeds.

Take One Concern, for instance.

Born out of Stanford under the mentorship of leading AI expert Andrew Ng, One Concern leverages AI through analytical disaster assessment and calculated damage estimates.

Partnering with the cities of Los Angeles and San Francisco, and numerous cities in San Mateo County, the platform assigns verified, unique ‘digital fingerprints’ to every element in a city. Building robust models of each system, One Concern’s AI platform can then monitor site-specific impacts of not only climate change but each individual natural disaster, from sweeping thermal shifts to seismic movement.

This data, combined with that of city infrastructure and former disasters, is then used to predict future damage under a range of disaster scenarios, informing prevention methods and identifying structures in need of reinforcement.

Within just four years, One Concern can now make precise predictions, with an 85 percent accuracy rate, in under 15 minutes.

And as IoT-connected devices and intelligent hardware continue to proliferate, an emerging trillion-sensor economy will only amplify AI’s predictive capacity, offering us immediate, preventive strategies long before disaster strikes.

Take forest fires, for instance.

University of Utah atmospheric scientist Adam Kochanski and a team of researchers are now refining a computer model with new data to predict how fires will spread and what weather events will follow in their wake.

Initiating a “prescribed fire” — a controlled fire typically intended for habitat restoration in forest regions — the team used numerous infrared camera-fitted drones, laser scanning, and sensors to collect data while Kochanski tested his predictive model’s forecasts.

While the generated data is still being processed, the experiment is contributing to ‘coupled fire-atmosphere models,’ which capture how wildfires and local weather conditions influence one another. Yet already, Kochanski’s model has proved remarkably predictive of the experimental fire’s actual behavior.

Paired with robust networks of sensors and autonomous drone fleets, computer models that incorporate weather conditions in AI forest fire mapping could help us to stem early fires before they gain momentum, saving forests, lives, and entire habitats.
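To make the spread-prediction idea concrete, here is a deliberately minimal fire-spread sketch in the cellular-automaton style that many spread models build on. It is an illustration only, not Kochanski’s coupled fire-atmosphere model; the grid size, ignition probabilities, and wind weighting are all invented for the example.

```python
import random

def step(grid, wind=(0, 1), p_base=0.3, p_wind=0.3):
    """Advance a toy fire-spread cellular automaton by one tick.

    Cell states: 0 = fuel, 1 = burning, 2 = burned out.
    wind: (dy, dx) direction; ignition from an upwind neighbor is boosted.
    """
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for y in range(rows):
        for x in range(cols):
            if grid[y][x] == 1:
                new[y][x] = 2  # burning cells burn out after one tick
            elif grid[y][x] == 0:
                # independent ignition chance from each burning neighbor
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y - dy, x - dx
                        if (dy or dx) and 0 <= ny < rows and 0 <= nx < cols \
                                and grid[ny][nx] == 1:
                            p = p_base + (p_wind if (dy, dx) == wind else 0)
                            if random.random() < p:
                                new[y][x] = 1
    return new

random.seed(42)
grid = [[0] * 20 for _ in range(20)]
grid[10][10] = 1  # single ignition point mid-grid
for _ in range(8):
    grid = step(grid)
burned = sum(cell == 2 for row in grid for cell in row)
print(f"cells burned after 8 ticks: {burned}")
```

Real coupled models replace these fixed probabilities with physics: fuel moisture, terrain slope, and the fire’s own feedback on local wind, which is exactly what the sensor and drone data above help calibrate.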

As mobile connectivity and abundant sensors converge with AI-mined crowd intelligence, real-time awareness will only multiply in speed and scale.

Imagining the Future….

Within the next 10 years, spatial web technology might even allow us to tap into mesh networks. 

In short, this means that individual mobile users can together establish a local mesh network using nothing but the compute power in their own devices.

Take this a step further, and a local population of strangers could collectively broadcast countless 360-degree feeds across a local mesh network. 

Imagine a scenario in which armed attacks break out across disjointed urban districts, each cluster of eye witnesses and at-risk civilians broadcasting an aggregate of 360-degree videos, all fed through photogrammetry AIs that build out a live hologram in real time, giving family members and first responders complete information.

Or take a coastal community in the throes of torrential rainfall and failing infrastructure. Now empowered by a collective live feed, verification of data reports takes a matter of seconds, and richly layered data informs first responders and AI platforms with unbelievable accuracy and specificity of relief needs.

By linking all the right technological pieces, we might even see the rise of automated drone deliveries. Imagine: crowdsourced intelligence is first cross-referenced with sensor data and verified algorithmically. AI is then leveraged to determine the specific needs and degree of urgency at ultra-precise coordinates. Within minutes, once approved by personnel, swarm robots rush to collect the requisite supplies, equipping size-appropriate drones with the right aid for rapid-fire delivery.
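The verify-then-prioritize pipeline described above can be sketched in a few lines. Everything here is hypothetical: the report fields, the box-distance sensor check, and the severity-times-people urgency score are stand-ins for what a real platform would compute.

```python
from dataclasses import dataclass

@dataclass
class Report:
    lat: float
    lon: float
    severity: int        # 1-5, self-reported by the crowd
    people_affected: int

def sensor_confirms(report, sensor_alerts, radius=0.05):
    """Treat a crowd report as verified if any sensor alert is nearby.
    (Naive lat/lon box check; a real system would use proper geodesy.)"""
    return any(abs(report.lat - a[0]) < radius and abs(report.lon - a[1]) < radius
               for a in sensor_alerts)

def triage(reports, sensor_alerts):
    """Return verified reports sorted by a simple urgency score."""
    verified = [r for r in reports if sensor_confirms(r, sensor_alerts)]
    return sorted(verified,
                  key=lambda r: r.severity * r.people_affected,
                  reverse=True)

reports = [
    Report(34.05, -118.24, severity=5, people_affected=40),
    Report(34.05, -118.25, severity=2, people_affected=3),
    Report(36.00, -120.00, severity=4, people_affected=10),  # no sensor nearby
]
sensor_alerts = [(34.06, -118.24)]  # e.g. a flood gauge tripping
queue = triage(reports, sensor_alerts)
print([r.people_affected for r in queue])  # most urgent verified report first
```

A production system would swap the box check for geodesic distance and the score for a learned model, but the shape of the pipeline, verifying crowd reports against independent sensors and then ranking by urgency, stays the same.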

This brings us to a second critical convergence: robots and drones.

While cutting-edge drone technology revolutionizes the way we deliver aid, new breakthroughs in AI-geared robotics are paving the way for superhuman emergency responses in some of today’s most dangerous environments. 

Let’s explore a few of the most disruptive examples to reach the testing phase.

First up….

Autonomous Robots and Swarm Solutions

As hardware advancements converge with exploding AI capabilities, disaster relief robots are graduating from assistance roles to fully autonomous responders at a breakneck pace.

Born out of MIT’s Biomimetic Robotics Lab, the Cheetah III is but one of many robots that may form our first line of defense in everything from earthquake search-and-rescue missions to high-risk ops in dangerous radiation zones.

Now capable of running at 6.4 meters per second, Cheetah III can even leap up to a height of 60 centimeters, autonomously determining how to avoid obstacles and jump over hurdles as they arise.

MIT Cheetah III

Source: Massachusetts Institute of Technology (MIT) 

Initially designed to perform a spectrum of inspection tasks in hazardous settings (think: nuclear plants or chemical factories), the Cheetah’s various iterations have focused on increasing its payload capacity and range of motion, and even adding a gripping function with enhanced dexterity.

But as explained by the Lab’s director and MIT Associate Professor Sangbae Kim, Cheetah III and future versions are aimed at saving lives in almost any environment: “Let’s say there’s a fire or high radiation, [whereby] nobody can even get in. [It’s in these circumstances that] we’re going to send a robot [to] check if people are inside. [And even] before doing all that, the short-term goal will be sending robot where we don’t want to send humans at all, […] for example, toxic areas or [those with] mild radiation.”

And the Cheetah III is not alone.

This past February, the Tokyo Electric Power Company (TEPCO) put one of its own robots to the test.

For the first time since Japan’s devastating 2011 tsunami, which led to three nuclear meltdowns at the Fukushima Daiichi nuclear power plant, a robot has successfully examined the reactor’s fuel.

Broadcasting the process with its built-in camera, the robot was able to retrieve small chunks of radioactive fuel at five of the six test sites, offering tremendous promise for long-term plans to clean up the still-deadly interior.

Also out of Japan, Mitsubishi Heavy Industries (MHI) is even using robots to fight fires with full autonomy. In a remarkable new feat, MHI’s Water Cannon Bot can now put out blazes in difficult-to-access or highly dangerous fire sites.

Delivering foam or water at 4,000 liters per minute and 1 megapascal (MPa) of pressure, the Cannon Bot and its accompanying Hose Extension Bot even form part of a greater AI-geared system that conducts reconnaissance and surveillance from larger transport vehicles.

As wildfires grow ever more untamable, high-volume production of such bots could prove a true lifesaver. Paired with predictive AI forest fire mapping and autonomous hauling vehicles, solutions like MHI’s Cannon Bot will not only save numerous lives but also prevent population displacement and paralyzing damage to our natural environment before disaster has the chance to spread.

But even in cases where emergency shelter is needed, groundbreaking (literally) robotics solutions are fast to the rescue.

After multiple iterations by Fastbrick Robotics, the Hadrian X end-to-end bricklaying robot can now autonomously build a fully livable, 180-square-meter home in under three days. Using a laser-guided robotic attachment, the all-in-one brick-loaded truck simply drives to a construction site and lays blocks through its robotic arm in accordance with a 3D model.

Hadrian Bricklaying Robot

Source: Fastbrick Robotics

Meeting verified building standards, Hadrian and similar solutions hold massive promise in the long-term, deployable across post-conflict refugee sites and regions recovering from natural catastrophes.

But what if we need to build emergency shelters from local soil at hand? Marking an extraordinary convergence between robotics and 3D printing, the Institute of Advanced Architecture of Catalonia (IAAC) is already working on a solution.

In a major feat for low-cost construction in remote zones, IAAC has found a way to convert almost any soil into a building material with three times the tensile strength of industrial clay. Offering myriad benefits, including natural insulation, low GHG emissions, fire protection, air circulation and thermal mediation, IAAC’s new 3D printed native soil can build houses on-site for as little as $1,000.

But while cutting-edge robotics unlocks extraordinary new frontiers for low-cost, large-scale emergency construction, novel hardware and computing breakthroughs are also enabling robotic solutions at the other extreme of the scale spectrum.

Again, inspired by biological phenomena, robotics specialists across the U.S. have begun to pilot tiny robotic prototypes for locating trapped individuals and assessing infrastructural damage.

Take RoboBees, tiny Harvard-developed bots that use electrostatic adhesion to ‘perch’ on walls and even ceilings, evaluating structural damage in the aftermath of an earthquake. 

Or Carnegie Mellon’s prototyped Snakebot, capable of navigating through entry points that would otherwise be completely inaccessible to human responders. Driven by AI, the Snakebot can maneuver through even the most densely packed rubble to locate survivors, using cameras and microphones for communication.

But when it comes to fast-paced reconnaissance in inaccessible regions, miniature robot swarms have good company.

Next-Generation Drones for Instantaneous Relief Supplies

Particularly in the case of wildfires and conflict zones, autonomous drone technology is fundamentally revolutionizing the way we identify survivors in need and automate relief supply.

Not only are drones enabling high-resolution imagery for real-time mapping and damage assessment, but preliminary research shows that UAVs far outpace ground-based rescue teams in locating isolated survivors.

As presented by a team of electrical engineers from the University of Science and Technology of China, drones could even build out a mobile wireless broadband network in record time using a “drone-assisted multi-hop device-to-device” program.
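At its core, such a multi-hop device-to-device scheme is a routing problem over a connectivity graph defined by radio range. The sketch below, with invented node positions and an invented range, finds a relay chain by breadth-first search; the cited research tackles far harder questions (drone placement, bandwidth, mobility), so treat this as a toy of the underlying idea.

```python
from collections import deque

def relay_path(nodes, src, dst, comm_range=1.5):
    """Find a minimum-hop relay path between two nodes via BFS over the
    connectivity graph implied by radio range. nodes: {name: (x, y)}."""
    def linked(a, b):
        (ax, ay), (bx, by) = nodes[a], nodes[b]
        return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 <= comm_range

    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for n in nodes:
            if n not in seen and linked(path[-1], n):
                seen.add(n)
                queue.append(path + [n])
    return None  # no relay chain reaches the destination

# Hypothetical layout: a ground station, three drones, an isolated phone
nodes = {
    "base": (0, 0),
    "drone1": (1, 0),
    "drone2": (2, 0.5),
    "drone3": (3, 0.5),
    "phone": (4, 0.5),
}
print(relay_path(nodes, "base", "phone"))
```

No single link spans base to phone here, yet the chain of drones restores connectivity, which is precisely the appeal in a disaster zone where ground infrastructure is down.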

And as shown during Houston’s Hurricane Harvey, drones can provide scores of predictive intel on everything from future flooding to damage estimates.

Among multiple others, a team led by Dr. Robin Murphy, Texas A&M computer science professor and director of the university’s Center for Robot-Assisted Search and Rescue, flew a total of 119 drone missions over the city, using everything from small-scale quadcopters to military-grade unmanned planes. Not only were these critical for monitoring levee infrastructure, but also for identifying those left behind by human rescue teams.

But beyond surveillance, UAVs have begun to provide lifesaving supplies across some of the most remote regions of the globe.

One of the most inspiring examples to date is Zipline.

Created in 2014, Zipline has completed 12,352 life-saving drone deliveries to date. While its drones are designed, tested, and assembled in California, Zipline primarily operates in Rwanda and Tanzania, hiring local operators and providing over 11 million people with instant access to medical supplies.

Providing everything from vaccines and HIV medications to blood and IV tubes, Zipline’s drones far outpace ground-based supply transport, in many instances providing life-critical blood cells, plasma and platelets in under an hour.

Zipline Drones

Source: Zipline

But drone technology is even beginning to transcend the limited scale of medical supplies and food.

Now developing its drones under contracts with DARPA and the U.S. Marine Corps, Logistic Gliders, Inc. has built autonomously navigating drones capable of carrying 1,800 pounds of cargo over unprecedented distances.

Built from plywood, Logistic Gliders’ aircraft are projected to cost as little as a few hundred dollars each, making them perfect candidates for high-volume, remote aid deliveries, whether flown by a pilot or self-navigated in accordance with real-time disaster-zone mapping.

As hardware continues to advance, autonomous drone technology coupled with real-time mapping algorithms opens no end of opportunities for aid supply, disaster monitoring, and richly layered intel previously unimaginable for humanitarian relief.

Concluding Thoughts

Perhaps one of the most consequential applications of converging technologies is their transformation of disaster relief methods.

While AI-driven intel platforms crowdsource firsthand experiential data from those on the ground, mobile connectivity and drone-supplied networks are granting newfound narrative power to those most in need.

And as a wave of new hardware advancements gives rise to robotic responders, swarm technology and aerial drones, we are fast approaching an age of instantaneous and efficiently distributed responses, in the midst of conflict and natural catastrophes alike.

Empowered by these new tools, what might we create when everyone on the planet has the same access to relief supplies and immediate resources? In a new age of prevention and fast recovery, what futures can you envision?

Board of Directors | Board of Advisors | Strategic Leadership

Please keep me in mind as your Executive Coach, openings for Senior Executive Engagements, and Board of Director openings. If you hear of anything within your network that you think might be a positive fit, I’d so appreciate if you could send a heads up my way. Email me: Cliff@InvestmentCapitalGrowth.com or Schedule a call: Cliff Locks

Download Resume

Download Resume (PDF)

Contributors: Peter Diamandis and Clifford Locks

Let’s get you educated on software infrastructure in computing and its impact on your business

Posted by Cliff Locks On October 23, 2019 at 10:06 am / In: Uncategorized


The AR Cloud

As AR hardware advances within its deceptive growth phase, the business opportunity for AR content creators is now—whether building virtual universes or digitizing our physical one.

But to create multi-player games, social media communities, and messaging platforms linked to the same physical space, a centralized AR Cloud must first unify all headsets within a synced virtual overlay.

Just as search engines like Google serve multiple operating systems, the AR Cloud will serve every headset. Yet unlike today’s Cloud computing infrastructure, the AR Cloud will need to churn constant input-output loops in real-time, crunching and serving up far more data than we can currently comprehend.

While most AR apps available today offer one-time wonders like furniture try-outs or human anatomy lessons, AR-native apps linked to daily tasks in the physical world will change the way we do everything.

“A real-time 3D (or spatial) map of the world, the AR Cloud, will be the single most important software infrastructure in computing,” believes Ori Inbar, co-founder of Augmented World Expo. “In a nutshell, with the AR Cloud, the entire world becomes a shared spatial screen, enabling multi-user engagement and collaboration.”

But the AR Cloud is also set to transform how information is organized. Currently, we actively input our questions and find answers through 2D mediums. But the AR Cloud will soon enable a smart environment that feeds us what is relevant, when relevant.

Local businesses that are inherently pertinent to you and your problems will auto-populate individualized data in your AR interface. Individuals’ backgrounds will pop up at networking events, particularly those who share your industry or interests, or who might be great partners for your next joint venture. That computing system you’ve just been shipped will guide you interactively through the assembly process—just give it a gaze and activate instructions with a blink.

Technological Requirements

But how do we actually build the AR Cloud?

As I’ve mentioned in previous blogs, the closest we have come to a widespread communal AR experience was Pokémon Go. To function, the game’s servers store geolocation, player activity, and specific location data. But even in the case of this sophisticated online-merge-offline AR experience, there is no shared memory of activities occurring in each location.

In tomorrow’s AR Cloud, a centralized AR backend would incorporate shared memory data, allowing us both individual gamification and seamless shared experience.

But to do so, the AR Cloud requires us to perfect point cloud capture, a method of capturing and reconstructing 3D areas. Several techniques—laser scanners like LiDAR, depth sensors like Kinect, or drone and satellite camera footage—will together enable a universal, high-integrity point cloud.

In a similar vein, a tremendous upcoming business challenge involves taking in scans from countless hardware devices and outputting data accessible to a range of platforms; that is, digitizing and continuously updating every square foot of physical space as user-worn sensors collect data.

To achieve this, we might think of solutions similar to (but far more sophisticated than) Google Tango’s “area learning,” wherein devices use camera footage and location data to recognize places they’ve seen before. Depth sensing and motion tracking will also play a critical role in environment creation.

And in terms of AR self-orientation, companies will need to develop universal localizers that give devices ultra-fast positional awareness. In this instance, crowdsourced 3D mesh stitching might be employed to stitch together all data generated by AR users, thereby recreating digital versions of shared physical environments.
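In spirit, that crowdsourced stitching amounts to fusing many local scans into one shared spatial map. A minimal sketch, assuming devices already share a common coordinate frame (the hard part that real localizers solve), merges point scans into a communal voxel occupancy set:

```python
def voxelize(points, voxel=0.5):
    """Snap 3D points to a voxel grid: the shared map stores which
    voxels any device has observed as occupied."""
    return {(int(x // voxel), int(y // voxel), int(z // voxel))
            for x, y, z in points}

def merge_scans(shared_map, scans, voxel=0.5):
    """Fold each device's local scan into one communal occupancy set."""
    for scan in scans:
        shared_map |= voxelize(scan, voxel)
    return shared_map

# Two hypothetical devices scanning overlapping corners of the same room
scan_a = [(0.1, 0.2, 0.0), (0.9, 1.1, 0.0), (2.0, 2.0, 1.0)]
scan_b = [(0.15, 0.22, 0.05), (3.1, 0.4, 0.0)]  # first point overlaps scan_a
shared = merge_scans(set(), [scan_a, scan_b])
print(len(shared))  # overlapping observations collapse to one shared voxel
```

The set union is what makes the map communal: two strangers observing the same corner contribute one agreed-upon voxel, and each new user both reads from and writes to the same structure.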

Finally, the AR Cloud will ride on massive surges in connectivity. As 5G, balloons and satellite networks proliferate worldwide, latency (i.e. the delay in data transfer) will vastly improve across AR devices, allowing constant real-time updates to the cloud.

Even today, network giants like Cisco, Microsoft, and IBM are already starting to tackle the AR Cloud’s infrastructural components.

Take Cisco, which now innovates across various IoT platform solutions—think: Cisco Kinetic, Cisco Jasper, and Cisco DNA (Digital Network Architecture)—supporting the ever-increasing bandwidth needs of smart, connected devices.

Or global non-profit Open AR Cloud (OARC), which spans projects from spatial indexing, to edge-computing and 5G, to security and privacy.

The Implications….

So what does it all mean?

Instant skills training: Anyone capable of following decent audiovisual explanations can become an expert on anything, whether in the middle of NYC or in rural Bangladesh, on-demand.

Screens go away: Your AR headset can project your watch, phone screen, health metrics, and entertainment anywhere, at any scale you desire. We first dematerialized radios, calculators, measuring tapes, and almost every computing tool into a handheld device. But now, we are dematerializing screens themselves, seeing through interfaces rather than looking into them.

Control what you see: Eliminate what you don’t want to see and populate ordinary environments with your desired reality. Your office floor becomes a calm pond, your windows a mountain view. Your kids might be surrounded by open canvases, how-it-works rundowns on any tool, or written vocabulary as you speak to them. Imagine telling your AI, “every time you see a coffee cup in the world, fill it with flowers.”

Never forget anyone’s name or birthday: The combination of facial recognition, AR, and AI will allow you to recognize anyone by name. You’ll immediately recognize a familiar face, recall how you know that person, and surface relevant information at the right moment.

Instantly recognize any “thing:” Look at any tool, piece of art, product (you name it!) and know exactly who made it, what it costs, what it does, how it might be assembled or disassembled, and the supply chain that brought it about.

Advent of digital fashion: Digital garments are overlaid seamlessly on your body in the AR Cloud, and digital copies of yourself might model new styles or innovative fashion ideas at whim. You can control who sees you in what clothing. Your colleagues can see you wearing one outfit, pedestrians another, your family members a third.

Training your AI: AR headwear will know where you’re looking, tracking your facial expressions, eye dilation and focus—all working with your personal AI to learn what you love, how you think, and what catches your imagination most.

Final Thoughts

Consider how companies, governments, artists and leaders will vie for priority in presenting AR-delivered data to your visual cortex.

Or ponder how you (or your AI) will curate your digital world. How you might maintain privacy (of which information and how much?). Do you want people looking at you to know your name? Your profession or birthdate?

AR will not only transform our world. It will fundamentally redefine it. Your combined AR/AI system can help you focus on what’s important, block out distractions, or help lift your mood when required.

The convergence of AR, gigabit/low-latency networks (such as 5G), IoT (i.e. sensors), AI and Blockchain is about to change almost every industry in the decade ahead, and create more opportunity for wealth creation than was possible in the past century!

Entrepreneurs pay attention! Consider these two economic predictions to understand the magnitude of what is coming.

  • First, McKinsey predicts that IoT will create $6.2 TRILLION of new economic value by 2025.
  • Second, Gartner predicts that AI augmentation will create $2.9 TRILLION of business value and 6.2 billion hours of worker productivity globally by 2021.

AR will play heavily in both. My advice to everyone… DON’T BLINK!



How Augmented Reality (AR) will change your industry

Posted by Cliff Locks On October 16, 2019 at 10:06 am / In: Uncategorized


There are already over 2,000 Augmented Reality (AR) apps running on more than 1.4 billion active iOS devices. Even if on a rudimentary level, the technology is now permeating the consumer products space.

And in just the next four years, the International Data Corporation (IDC) forecasts AR headset production will surge 141 percent each year, reaching a whopping 32 million units by 2023.
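Back-solving the IDC figures gives a feel for that compounding: growing 141 percent per year for four years multiplies the base by roughly 34x, which implies a 2019 baseline of just under a million headsets. The baseline here is an inference from the article’s numbers, not an IDC figure.

```python
growth = 1 + 1.41          # 141 percent year-over-year growth
factor = growth ** 4       # compounding from 2019 to 2023
base_2019 = 32e6 / factor  # back-solved 2019 baseline (inference, not IDC data)
print(f"growth factor over 4 years: {factor:.1f}x")
print(f"implied 2019 baseline: {base_2019 / 1e6:.2f} million units")
```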

AR will soon serve as a surgeon’s assistant, a sales agent, and an educator, personalized to your kids’ learning patterns and interests.

In this fourth installment of our five-part AR series, I’m doing a deep dive into AR’s most exciting industry applications, poised to hit the market in the next 5-10 years.

Let’s dive in.

Healthcare 

(1) Surgeons and physicians: 

Whether through detailed and dynamic anatomical annotations or visualized patient-specific guidance, AR will soon augment every human medical practitioner.

To start, AR is already being used as a diagnosis tool. SyncThink, recently hired by Magic Leap, has developed eye-tracking technology to diagnose concussions and balance disorders. Yet another startup, XRHealth, launched its ARHealth platform on Magic Leap to aid in rehabilitation, pain distraction, and psychological assessment.

SyncThink

Moreover, surgeons at Imperial College London have used Microsoft’s HoloLens 1 in pre-operative reconstructive and plastic surgery procedures, which typically involve using CT scans to map the blood vessels that supply vital nutrients during surgery.

As explained by the project’s senior researcher, Dr. Philip Pratt, “With the HoloLens, we’re now doing the same kind of [scan] and then processing the data captured to make it suitable to look at. That means we end up with a silhouette of a limb, the location of the injury, and the course of the vessels through the area, as opposed to this grayscale image of a scan and a bit more guesswork.”

Dramatically lowering associated risks, AR can even help surgeons visualize the depth of vessels and choose the optimal incision location.

And while the HoloLens 1 was only used in pre-op visualizations, Microsoft’s HoloLens 2 is on track to reach the operating table. Take Philips’ Azurion image-guided therapy platform, for instance. Built specifically for the HoloLens 2, Azurion strives to provide surgeons with real-time patient data and dynamic 3D imagery as they operate.

Moreover, AR headsets and the virtual overlays they provide will exponentially improve sharing of expertise across hospitals and medical practices. Niche medical specialists will be able to direct surgeons remotely from across the country (not to mention the other side of the planet), or even view annotated AR scans to offer their advice.

Magic Leap, in its own right, is now collaborating with German medical company Brainlab to create a 3D spatial viewer that would allow clinicians to work together in surgical procedures across disciplines.

Source: Brainlab

But beyond democratizing medical expertise, AR will even provide instantaneous patient histories, gearing doctors with AI-processed information for more accurate diagnoses in a fraction of the time.

By saving physicians’ time, AR will therefore free doctors to spend a greater percentage of their day engaging in face-to-face contact with their patients, establishing trust, compassion, and an opportunity to educate healthcare consumers (rather than merely treating them).

And when it comes to digital records, doctors can simply use voice control to transcribe entire interactions and patient visits, multiplying what can be done in a day, and vastly improving the patient experience.

(2) Assistance for those with disabilities: 

Today, over 3.4 million visually impaired individuals reside in the U.S. alone. But thanks to new developments in the AI-integrated smart glasses realm, associated constraints could soon fade in severity.

And new pioneers continue to enter the market, including NavCog, Horus, AIServe, and MyEye, among others. Microsoft has even begun development of a “Seeing AI” app, which translates the world into audio descriptions for the blind, as seen through a smartphone’s camera lens.

Vision of Children Foundation

During the Reality Virtual Hackathon in January, hosted by Magic Leap at MIT, two of the top three winners catered to disabilities. CleARsite provided environment reconstruction, haptic feedback, and Soundfield Audio overlay to enhance a visually impaired individual’s interaction with the world. Meanwhile, HeAR used a Magic Leap 1 headset to translate vocals or sign language into readable text in speech bubbles in the user’s field of view. Magic Leap remains dedicated to numerous such applications, each slated to vastly improve quality of life.

(3) Biometric displays:

In biometrics, cyclist sunglasses and swimmer goggles have evolved into the perfect medium for AR health metric displays. Smart glasses like the Solos ($499) and Everysight Raptors ($599) provide cyclists with data on speed, power, and heart rate, along with navigation instructions. Meanwhile, Form goggles ($199)—just released at the end of August—show swimmers their pace, calories burned, distance, and stroke count in real-time, up to 32 feet underwater.


Accessible health data will shift off our wrists and into our fields of view, offering us personalized health recommendations and pushing our training limits alike.

Retail & Advertising

(1) Virtual shopping:

The year is 2030. Walk into any (now AI-driven, sensor-laden, and IoT-retrofitted) store, and every mannequin will be wearing a digital design customized to your preferences. Forget digging through racks of garments or hunting down your size. Cross-referencing your purchase history, gaze patterns, and current closet inventory, AIs will display tailor-made items most suitable for your wardrobe, adjusted to your individual measurements.


An app available on most Android smartphones, Google Lens is already leaping into this marketplace, allowing users to scan QR codes and objects through their smartphone cameras. Its Style Match feature even lets consumers identify pieces of clothing or furniture and view similar designs available online through e-commerce platforms.

(2) Advertising:

And these mobile AR features are quickly encroaching upon ads as well.

In July, the New York Times debuted an AR ad for Netflix’s “Stranger Things,” for instance, guiding smartphone users to scan the page with their Google Lens app and experience the show’s fictional Starcourt Mall come to life.

Source: App Developer Magazine.

But immersive AR advertisements of the future won’t all be unsolicited and obtrusive. Many will likely prove helpful.

As you walk down a grocery store aisle, discounts and special deals on your favorite items might populate your AR smart glasses. Or if you find yourself admiring an expensive pair of pants, your headset might suggest similar items at a lower cost, or cheaper distributors with the same product. Passing a stadium on the way to work, next weekend’s best concert ticket deals might filter through your AR suggestions—whether your personal AI intends them for your friend’s upcoming birthday or your own enjoyment.

Instead of bombarding you at every turn on a handheld device, ads will appear only when most relevant to your physical surroundings. Or toggle them off entirely, and have your personal AI do the product research for you.

Education & Travel

(1) Customized, continuous learning:

The convergence of today’s AI revolution with AR advancements gives us the ability to create individually customized learning environments.

Throw sensors into the mix to track neural and physiological data, and students will soon be empowered to cultivate a growth mindset, and even work toward achieving a flow state (which research shows can vastly amplify learning).


Within the classroom, Magic Leap One’s Lumin operating system allows multiple wearers to share in a digital experience, such as a dissection or historical map. And from a collaborative creation standpoint, students can use Magic Leap’s CAD application to join forces on 3D designs.

If successful, AR’s convergence with biometric sensors and AI will give rise to an extraordinarily different education system: one composed of delocalized, individually customizable, responsive, and accelerated learning environments.

Continuous and learn-everywhere education will no longer be confined to the classroom. Already, numerous AR mobile apps can identify objects in a user’s visual field, instantaneously presenting relevant information. As user interface hardware undergoes a dramatic shift in the next decade, these software capabilities will only explode in development and use.

Gazing out your window at a cloud will unlock interactive information about the water cycle and climate science. Walking past an old building, you might effortlessly learn about its history dating back to the sixteenth century. I often discuss information abundance, but it is data’s accessibility that will soon drive knowledge abundance. 

(2) Training:

AR will enable on-the-job training at far lower costs in almost any environment, from factories to hospitals.

Smart glasses are already beginning to guide manufacturing plant employees as they learn how to assemble new equipment. Retailers stand to slash the time it takes to train a new employee with AR tours and product descriptions.

And already, automotive technicians can better understand the internal components of a vehicle without dismantling it. Jaguar Land Rover, for instance, recently implemented Bosch’s Re’flekt One AR solution, which gives technicians “x-ray” vision, allowing them to visualize the insides of Range Rover Sport vehicles without removing the dashboard.

In healthcare, medical students will be able to practice surgeries on artificial cadavers with hyper-realistic AR displays. Not only will this allow them to rapidly iterate on their surgical skills, but AR will dramatically lower the cost and constraints of standard medical degrees and specializations.

Meanwhile, sports training in simulators will vastly improve with advanced AR headset technology. Even practicing chess or piano will be achievable with any tabletop surface, allowing us to hone real skills with virtual interfaces.

(3) Travel:

As with most tasks, AI’s convergence with AR glasses will allow us to outsource all the most difficult (and least enjoyable) decisions associated with travel, whether finding the best restaurants or well-suited local experiences.

But perhaps one of AR’s more sophisticated uses (already rolling out today) involves translation. Whether you need to decode a menu or access subtitles while conversing across a language barrier, instantaneous translation is about to improve exponentially with the rise of AI-powered AR glasses. Even today, Google Translate can already convert menu text and street signs in real time through your smartphone.

Manufacturing

As I explored last week, manufacturing presents the nearest-term frontier for AR’s commercial use. As a result, many of today’s leading headset companies—including Magic Leap, Vuzix, and Microsoft—are seeking out initial adopters and enterprise applications in the manufacturing realm.

Source: Arm Blueprint.

(1) Design:

Targeting the technology for simulation purposes, Airbus launched an AR model of the MRH-90 Taipan aircraft just last year, allowing designers and engineers to view various components, potential upgrades, and electro-optical sensors before execution. The model saved big on parts and overhead costs, giving technicians the opportunity to make important design changes without losing their hands-on interaction with the aircraft.

(2) Supply chain optimization: 

AR guidance linked to a centralized AI will also mitigate supply chain inefficiencies. Coordinating moving parts, eliminating the need to hold a scanner at each checkpoint, and directing traffic within warehouses will vastly improve workflow.

After initially implementing AR “vision picking” in 2015, leading logistics company DHL recently announced it would continue to use the newest Google smart lens in warehouses across the world. Or take automotive supplier ZF, which has now rolled out use of the HoloLens in plant maintenance.

Source: Jasoren. Note the green arrow projected onto the floor, along with the product photo and pick quantity, guiding order picking and reducing costs.

(3) Quality assurance & accessible expertise:

AR technology will also play a critical role in quality assurance, as it already does in Porsche’s assembly plant in Leipzig, Germany. Whenever manufacturers require guidance from engineers, remote assistance is effectively no longer remote, as equipment experts guide employees through their AR glasses and teach them on the job.

Transportation & Navigation

(1) Autonomous vehicles:

To start, Nvidia’s Drive platform for Level 2+ autonomous vehicles is already combining sensor fusion and perception with AR dashboard displays to alert drivers of road hazards, highlight points of interest, and provide navigation assistance.

Source: Next Reality – Augmented Reality News.

And in our current transition phase of partially autonomous vehicles, such AR integration allows drivers to monitor conditions yet eases the burden of constant attention to the road. Along these lines, Volkswagen has already partnered with Nvidia to produce I.D. Buzz electric cars, set to run on the Drive OS by 2020. And Nvidia’s platform is fast on the move, having additionally partnered with Toyota, Uber, and Mercedes-Benz. Within just the next few years, AR displays may be commonplace in these vehicles.

(2) Navigation:

Source: The Verge.

We’ve all seen (or been) that someone spinning around with their smartphone to decipher the first few steps of a digital map’s commands. But AR is already making everyday navigation intuitive and efficient.

Google Maps’ AR feature has already been demoed on Pixel phones: instead of staring at your map from a bird’s eye view, users direct their camera at the street, and superimposed directions are immediately layered virtually on top.

Not only that, but as AI identifies what you see, it instantaneously communicates with your GPS to pinpoint your location and orientation. Although a mainstream rollout date has not yet been announced, this feature will likely make it to your phone in the very near future.

Entertainment

(1) Gaming:

We got our first taste of AR’s real-world gamification in 2016, when Nintendo released Pokémon Go. And today, the gaming app has surpassed 1 billion downloads. But in contrast to VR, AR is increasingly seen as a medium for bringing gamers together in the physical world, encouraging outdoor exploration, activity, and human connection in the process.

And in the recently exploding eSports industry, AR has the potential to turn players’ screens into live-action stadiums. Just this year, the global eSports market is projected to exceed US$1.1 billion in revenue, and AR’s potential to elevate the experience will only see this number soar.

(2) Art:

Many of today’s most popular AR apps allow users to throw dinosaurs into their surroundings (Monster Park), learn how to dance (Dance Reality), or try on highly convincing virtual tattoos (InkHunter).

And as high-definition rendering becomes more commonplace, art will, too, grow more and more accessible.

Magic Leap aims to construct an entire “Magicverse” of digital layers superimposed on our physical reality. Location-based AR displays, ranging from art installations to gaming hubs, will be viewable in a shared experience across hundreds of headsets. Individuals will simply toggle between modes to access whichever version of the universe they desire. Endless opportunities to design our surroundings will arise.  

Apple, in its own right, recently announced the company’s [AR]T initiative, which consists of floating digital installations. Viewable through [AR]T Viewer apps in Apple stores, these installations can also be found in [AR]T City Walks guiding users through popular cities, and [AR]T Labs, which teach participants how to use Swift Playgrounds (an iPad app) to create AR experiences.

(3) Shows:

And at the recent Siggraph Conference in Los Angeles, Magic Leap introduced an AR-theater hybrid called Mary and the Monster, wherein viewers watched a barren “diorama-like stage” come to life in AR.

Source: Venture Beat.

While audience members shared a common experience, as with a traditional play, individuals could also zoom in on specific actors to observe their expressions more closely.

Say goodbye to opera glasses and hello to AR headsets.

Final Thoughts

While AR headset manufacturers and mixed reality developers race to build enterprise solutions from manufacturing to transportation, AR’s use in consumer products is following close behind.

Magic Leap leads the way in developing consumer experiences we’ve long been waiting for, as the “Magicverse” of localized AR displays in shared physical spaces will reinvent our modes of connection.

And as AR-supportive hardware is now built into today’s newest smartphones, businesses have an invaluable opportunity to gamify products and immerse millions of consumers in service-related AR experiences.

Even beyond the most obvious first-order AR business cases, new industries to support the augmented world of 2030 will soon surge in market competition, whether headset hardware, data storage solutions, sensors, or holograph and projection technologies.

Jump on the bandwagon now: the future is faster than you think!

Board of Directors | Board of Advisors | Strategic Leadership

Please keep me in mind for Executive Coaching, Senior Executive engagements, and Board of Director openings. If you hear of anything within your network that you think might be a positive fit, I’d so appreciate a heads up. Email me: Cliff@InvestmentCapitalGrowth.com or Schedule a call: Cliff Locks

Download Resume (PDF)

#BoardofDirectors #BoD #artificialintelligence #AI #innovation #IoT #virtualreality #vr #AR #augmentedreality #HR #executive #business #CXO #CEO #CFO #CIO #BoardofDirectors #executive #success #work #follow #leadership #Engineering #corporate #office #Biotech #Cleantech #CAD #entrepreneur #coaching #businessman #professional #excellence #development #motivation Contributors: Peter Diamandis and Clifford Locks #InvestmentCapitalGrowth

Exponential CHINA – something you can’t ignore and why you should be educated on their tech market and the size of their GDP

Posted by Cliff Locks On June 26, 2019 at 10:04 am / In: Uncategorized

Exponential CHINA – something you can’t ignore and why you should be educated on their tech market and the size of their GDP

President Donald Trump with China’s President Xi Jinping during their bilateral meeting at the G20 Summit, Saturday, Dec. 1, 2018 in Buenos Aires, Argentina. (AP Photo/Pablo Martinez Monsivais)

The rise of China as an epicenter of rapid-fire innovation and technological disruption is more important than ever before in transforming each and every one of our businesses.

Soon to surpass the U.S. as the world’s largest economy with a $14 trillion GDP, China has grown at about 9.6 percent CAGR since 1989, accounting for an estimated 35 percent of global economic growth from 2017 to 2019 — nearly double the U.S. GDP’s predicted 18 percent.

In less than two years since China’s government announced its plan to lead the world in AI by 2030, the country has witnessed a record-breaking surge in both AI enterprise and innovation across biotech and longevity, smart manufacturing, autonomous and electric vehicles, next-gen renewable energies, and new tech-driven markets you’ve never even heard of.

As Eric Schmidt has explained, “it’s pretty simple. By 2020, they will have caught up. By 2025, they will be better than us. By 2030, they will dominate the industries of AI.”

And the figures don’t lie.

PricewaterhouseCoopers recently projected AI’s deployment will add $15.7 trillion to the global GDP by 2030, with China taking home $7 trillion of that total, dwarfing North America’s $3.7 trillion in gains.
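As a back-of-envelope check (illustrative arithmetic only, using just the figures quoted above), the implied regional shares work out as follows:

```python
# Back-of-envelope check of the PwC projections cited above.
# All figures in trillions of USD, taken directly from the text.
global_ai_gain = 15.7
china_gain = 7.0
north_america_gain = 3.7

china_share = china_gain / global_ai_gain        # roughly 45%
na_share = north_america_gain / global_ai_gain   # roughly 24%

print(f"China captures ~{china_share:.0%} of projected AI gains")
print(f"North America captures ~{na_share:.0%}")
```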

Behind the scenes, a growing force of driven AI entrepreneurs trains cutting-edge algorithms on some of the largest datasets available to date.

Key Takeaways: China, AI & The Future of Your Business

While countless Chinese startups still observe U.S. company strategies and iterate on Western business trends, today’s American businesses would be severely remiss to forgo analysis of Chinese companies and best practices.

In the midst of its ongoing heyday, China is undergoing a technological renaissance bolstered by thousands of startups and dozens of multibillion-dollar tech companies. As firms like Alibaba, Tencent, and Xiaomi capitalize on China’s now 1 billion+ Internet users at home, burgeoning Chinese unicorns have established thriving markets among Western consumers.

Yet for many, what drives the business of China remains shrouded in mystery. What’s driving this entrepreneurial ecosystem? How can entrepreneurs collaborate with China or model its innovative culture closer to home?

This blog presents key insights from the go-to experts in China: Dr. Kai-Fu Lee of Sinovation Ventures and David Li of Shenzhen Open Innovation Lab.

Dr. Kai-Fu Lee serves as chairman and CEO of Sinovation Ventures, a venture fund with US$2B in assets under management. Dr. Lee previously served as President of Google China, held numerous executive positions at Microsoft, Apple, and SGI, and was named one of TIME magazine’s 100 most influential people.

David Li is Founder and Director of Shenzhen Open Innovation Lab (SZOIL) and has been honored with the title of “Top Maker in Asia.” One of the foremost leaders of Shenzhen’s Maker Movement, Mr. Li co-founded Maker Collider, a platform to develop next-gen IoT from the maker community, and XinCheJian, the first hackerspace in China to promote hacker and maker culture and open-source hardware, among other ventures.

When thinking about the AI industry and the underlying technological architecture as a whole, one tool I find highly useful is Kai-Fu Lee’s “Four Waves of AI.” As outlined in his diagram below, these consist of Internet AI, Business AI, Perception AI, and the most nascent wave, Autonomous AI.

The Four Waves of AI. Source: Dr. Kai-Fu Lee.

Yet while U.S. tech giants may have birthed the first two waves of AI, the data emerging today stands compellingly in China’s favor across all four categories.

China’s Competitive Advantage

Amid an explosion in grassroots innovation, smart manufacturing, and abundant, high-quality data, China is surging ahead on the back of several core drivers:

(1) Massive proliferation of engineers and entrepreneurs: Demand for AI engineers has skyrocketed across China in a matter of just three years. The proliferation of local entrepreneurs, makers, tinkerers, and AI scholars is on a scale unimaginable to U.S. businesses, CEOs, and VCs.

But underpinning these numbers is a radically different cultural approach to innovation and an abundance mindset about tools and (lack of) constraints in breaking open new markets. 

As David Li synthesizes, “What’s happening in China is not just headline innovation. It’s what’s going on at the bottom side of China. People are excited. Entrepreneurship is everywhere. Opportunity is everywhere. And [common citizens] are supported by this rapid advancement of technology. They don’t see technology coming and say, ‘My God! It’s going to take my job.’ […] They ask ‘How do I make a buck with this new thing?’ And when we have tens of millions of people thinking like this, it’s what makes China’s economy work.”

One booming example is Shenzhen.

We often think of “catch-up” regions receiving new gadgets, technologies and intellectual property from “developed” regions on the tail-end of their development. Some U.S. tech giant brimming with capital builds the next big breakthrough, and its wisdom, technology, and know-how slowly trickle down the hierarchy.

Not in Shenzhen. With a mean age of 28.65, Shenzhen residents have local dynamism embedded in their DNA. David Li captures this environment, one in which “people want to grab onto every opportunity [and believe they] have the tenacity to make something happen.”

Growing from a population of 300,000 to 15 million in just 40 years, Shenzhen has registered over 3 million companies. That makes 1 in 5 Shenzhen residents a CEO.

And if that isn’t enough to convince you, look at Shenzhen’s GDP. A mere fishing village in 1987, Shenzhen stood at 0.1 percent of Hong Kong’s GDP. Today? Shenzhen is surging past a US$350 billion GDP, far exceeding its next-door neighbor.

(2) End of a copycat era: But beyond skyrocketing rates of local entrepreneurship, China’s “copycat economy” is long gone. Today, local entrepreneurs have created and iterated upon novel concepts, resulting in markets that far exceed the scale of their American counterparts. 

Take Chinese mobile payments spending, which now exceeds the U.S. by a ratio of 50 to 1. Or video-based social networking apps that are making it big in Western markets (think video-sharing app TikTok, now wildly popular amongst U.S. teens). Shared bicycle networks that span hundreds of millions of users. And gamified, socialized e-commerce that presents one of the biggest playgrounds for AI training on the planet.

Even in legacy markets, China has become a dominant global player, out-diversifying Western counterparts. Take mobile phone shipments worldwide. As explained by David Li, “In 2017, Apple stood at 14%, Samsung at 15%, and [of] everybody else, almost all are Chinese companies. And with the exception of Xiaomi, everybody else is headquartered in Shenzhen.” This means “one city in China has 70% of the global market share of mobile phones.”

(3) An abundance of capital pouring into AI: Last year, for the first time ever, China surpassed North America in venture capital financing, as Chinese startups raised over US$56 billion in the first half of the year. By end of Q2, Chinese startups accounted for 47% of global VC funding.

Already, Chinese investments in AI, chips and electric vehicles have reached an estimated $300 billion. Meanwhile, AI giant Alibaba has unveiled plans to invest $15 billion in international research labs from the U.S. to Israel, with others following suit.

Just last year alone, nearly 100 Chinese startups hit unicorn status, each reaching a $1 billion valuation. Think about that for a moment. China saw the birth of 1 unicorn almost every 3.6 days.
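That cadence is simple arithmetic; a one-line check (illustrative only, using the approximate count cited above):

```python
# Sanity check on the cadence cited above: ~100 new unicorns in a
# single year implies one roughly every 3.6 days.
new_unicorns = 100                    # approximate count for the year
days_per_unicorn = 365 / new_unicorns
print(f"One new unicorn every {days_per_unicorn:.1f} days")
```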

And even while trade tensions have temporarily slowed investor enthusiasm, groundbreaking startup after startup has sent investment figures booming. Led by FinTech giant Ant Financial (an affiliate of Alibaba that last year raised nearly as much capital as all U.S. and European FinTech companies combined), China’s list of unicorns grows ever longer, currently standing at around 186 tech startups. 

(4) Astronomical quantities of high-quality data to train AI algorithms: China not only has three times as many AI-driven mobile platform users as the U.S., but its usage time and real-world, layered data vastly exceed those enjoyed by Western tech giants. Chinese citizens not only spend 50 times more than their American counterparts via mobile payments; they order 10 times more food delivery and produce roughly 300 times more real-world movement data through shared bicycle platforms (not to mention ridesharing services).

These numbers are notable, yes. But what’s truly remarkable about these statistics is what the data reveal about offline activity.

Whereas most U.S. tech giants enjoy tomes of data about their users’ every click and online glance, it is mobile payments, ridesharing data, and smart city technologies that offer goldmines of real-world information about everyday users. And in China, last year’s mobile payment transactions exceeded the country’s GDP (don’t believe me? Read an explanation here). Credit cards and cash have grown virtually obsolete, and even beggars hold up signs reading, “I’m hungry, scan me.” This massive adoption of AI across all facets of life facilitates unprecedented levels of training and improvement across data-dependent algorithms.
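If it seems impossible for payment volume to exceed GDP, recall that GDP counts final output only once, while payment volume tallies every transfer, so the same money circulating through a supply chain is counted at each hop. A toy example with purely hypothetical numbers:

```python
# Illustrative only: why payment volume can exceed GDP.
# GDP counts final output once; payment volume counts every transfer.
# Here the same 100 RMB changes hands 4 times along a supply chain.
transfers = [100, 100, 100, 100]
payment_volume = sum(transfers)   # 400 RMB of measured transactions
value_added = 100                 # but only 100 RMB of final output (GDP)
print(payment_volume > value_added)  # True
```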

(5) Arguably the most AI-supportive government in the world: Just two years ago, China’s government issued its plan to make China the global center of AI innovation, aiming for a 1 trillion RMB (about $150 billion USD) AI industry by 2030.

And when China’s State Council speaks, everyone listens.

As the nation’s political system incentivizes local officials to outcompete others for leadership in CCP initiatives, positive feedback loops have seen countless government officials luring in AI companies and entrepreneurs with generous subsidies and advantageous policies. Mayors across the country (largely in eastern China) have built out innovation zones, incubators and government-backed VC funds, even covering rent and clearing out avenues for AI startups and accelerators.

Beijing plans to invest $2 billion in an AI development park, which would house up to 400 AI enterprises and a national AI lab, driving R&D, patents and societal innovation. Hangzhou, home to Alibaba’s HQ, has also launched its own AI park, backed by a fund of 10 billion RMB (nearly $1.6 billion USD). But Hangzhou and Beijing are just two of the 19 different cities and provinces investing in AI-driven city infrastructure and policy.

Cities like Xiong’an New Area are building out entire AI metropoles in the next two decades, centered around autonomous vehicles, smart solar panel-embedded roads, and computer vision-geared infrastructure. Projected to take in over $580 billion in infrastructure spending over the next 20 years, Xiong’an has ambitious plans to split its entire downtown into two levels: a top level for parks, trees, pets, kids, bicycles, skateboards, and human pedestrians, and a lower level reserved solely for cars (autonomous, electric vehicles, of course), eliminating the possibility of vehicle-human collisions and multiplying efficiency.

Lastly, local governments have begun to team with China’s leading AI companies to build up party-corporate complexes. Acting as a “national team,” companies like Baidu, Alibaba, Tencent, SenseTime, and iFlyTek collaborate with national organizations like China’s National Engineering Lab for Deep Learning Technologies to pioneer research and supercharge innovation.

Pulling out all the stops, China’s government is flooding the market with AI-targeted funds as Chinese tech giants and adrenalized startups rise to leverage this capital.

Final Thoughts

China’s emergence as a leader in AI sets the stage for a tremendous opportunity: U.S.-Chinese collaboration and shared lessons learned, vital for expediting and shaping the direction of future progress.

As the world learns to grapple with a future of dual human-AI intelligence, U.S. and Chinese businesses stand at a critical juncture in history, requiring shared innovation and new modes of cooperation.


Networked Vehicles Will Allow for Automated Megacities

Posted by Cliff Locks On March 27, 2019 at 10:08 am / In: Uncategorized

Networked Vehicles Will Allow for Automated Megacities

Tomorrow’s cities are reshaping almost every industry imaginable, and birthing those we’ve never heard of.

Riding an explosion of sensors, megacity AI ‘brains,’ high-speed networks, new materials and breakthrough green solutions, cities are quickly becoming versatile organisms, sustaining and responding to the livelihood patterns of millions.

Over the next decade, cities will revolutionize everything about the way we live, travel, eat, work, learn, stay healthy, and even hydrate.

And countless urban centers, companies, and visionaries are already building out decades-long visions of the future.

Setting its sights on self-sustaining green cities, the UAE has invested record sums in its Vision 2021 plan, while sub-initiatives like Smart Dubai 2021 charge ahead with AI-geared government services, driverless car networks and desalination plants.

A trailblazer of smart governance, Estonia has leveraged blockchain, AI and ultra-high connection speeds to build a new generation of technological statecraft. 

And city states like Singapore have used complex computational models to optimize everything from rainwater capture networks to urban planning, down to the routing of its ocean breeze.

While not given nearly enough credit, the personal vehicle and urban transportation stand at the core of shaping our future cities.

Yet today, your car remains an unused asset about 95 percent of the time.

In highly dense cities like Los Angeles, parking gobbles up almost 15 percent of all urban land area. 

And with a whopping economic footprint, today’s global auto insurance market stands at over $200 billion. 

But the personal vehicle model is on the verge of sweeping disruptions, and tomorrow’s cities will transform right along with it.

Already, driverless cars pose game-changing second-order implications for the next decade. 

Take land use, for instance. By 2035, parking spaces are expected to decline by 5.7 million square meters, a boon for densely packed cities where real estate is worth its area in gold.

Beyond sheer land, a 90 percent driverless car penetration rate could result in $447 billion of projected savings and productivity gains.

But what do autonomous vehicles mean for city planning?

Let’s imagine a 100 percent autonomous vehicle (AV) penetration rate. Cars have reached Level-5 automation, are 100 percent self-driving and can now communicate seamlessly with each other.

With a packing density 8X what it is today in most cities, commutes now take a fraction of the time. Some have even predicted the recovery of over 2.7 billion unproductive hours.

But time savings aside, cars can now be entirely reimagined, serving a dual purpose for sleep, office work, morning calls, time with your kids, you name it.

With plummeting commute times and functional vehicles (think: a mobile office, bed, or social space), cities need no longer be geographically concentrated, allowing you to live well outside the bounds of a business district.

And as AVs give rise to an on-demand, Cars-as-a-Service (CaaS) business model, urban sprawl will enable the flourishing of megacities on an unprecedented scale.

While architects and civil engineers leap to the scene, others are already building out smart network precursors for a future of decentralized vehicles.

Using Narrowband-IoT (NB-IoT) for low power consumption, Huawei has recently launched a smart parking network in Shanghai that finds nearby parking spots for users on the go, allowing passengers to book and pay via smartphone in record time.

In the near future, however, vehicles — not drivers — will book vertically stacked parking spots and charge CaaS suppliers on their own (for storage).

This is where 5G networks come in, driving down latencies between driverless cars, as well as between AVs and their CaaS providers. Using sensor suites and advanced AI, vehicles will make smart transactions in real-time, charging consumers by the minute or mile, notifying manufacturers of wear-and-tear or suboptimal conditions, and even billing for insurance dollars in the now highly unlikely case of a fender-bender.
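To make the billing model concrete, here is a minimal sketch of per-minute, per-mile metering of the kind described above. The class name, fields, and rates are all hypothetical, not any real CaaS provider’s API:

```python
from dataclasses import dataclass

# Hypothetical sketch of usage-based Cars-as-a-Service billing:
# charge the rider by the minute and by the mile. All rates and
# names are illustrative assumptions.
@dataclass
class TripMeter:
    per_minute_usd: float = 0.20
    per_mile_usd: float = 0.50

    def fare(self, minutes: float, miles: float) -> float:
        """Total charge for a trip, rounded to the cent."""
        return round(minutes * self.per_minute_usd
                     + miles * self.per_mile_usd, 2)

meter = TripMeter()
# A 25-minute, 12-mile commute: 25*0.20 + 12*0.50 = 11.0 USD
print(meter.fare(minutes=25, miles=12))
```

In practice the vehicle itself, not an app, would emit these metering events over the low-latency network, but the billing arithmetic stays this simple.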

With an eye to the future, cellular equipment manufacturers are building out the critical infrastructure for these and similar capabilities. Huawei, for one, is embedding chipsets under parking spaces across Shanghai, each collating and transmitting real-time data on occupancy rates, as the company ramps up its 5G networks.

And Huawei is not alone.

Building out a similar solution is China Unicom, whose smart city projects span the gamut from smart rivers that communicate details of environmental pollution, to IoT and AI-geared drones in agriculture.

Already, China Unicom has established critical communications infrastructure with an NB-IoT network that spans over 300 Chinese cities, additionally deploying eMTC, a lower power wide area technology that leverages existing LTE base stations for IoT support.

Beyond its mobile carriers, however, China has brought together four key private sector players to drive the world’s largest coordinated smart city initiative yet. Announced just last August at China’s Smart City International Expo, the official partnership knights a true power team, composed of Ping An, Alibaba, Tencent, and Huawei (PATH).

With 500 cities under their purview, these tech giants are each tackling a piece of the puzzle.

On the heels of over ten years of research and 50 billion RMB (over US$7.4 billion), Chinese insurance giant Ping An released a white paper addressing smart city strategies across blockchain, biometrics, AI and cloud computing.

Meanwhile, Alibaba plans to embed seamless mobile payments (through AliPay) into the fabric of daily life, as Tencent takes charge of communications and Huawei works on hardware and 5G buildout (not to mention its signature smartphones). 

But it isn’t just driverless vehicles that are changing the game for smart cities.

One of the most advanced city-states on the planet, Singapore joins Dubai in envisioning a future of flying vehicles and optimized airway traffic flow.

As imagined by award-winning architect of Singapore’s first zero-carbon house, Jason Pomeroy, Singapore could in the not-too-distant future explore everything from air rights to flying car structures built above motorways and skyscrapers. 

“Fast-forward 50 years from now. You already see drone technology getting so advanced, [so] why are we not sticking people into those drones? All of a sudden, your sky courts, your sky gardens, even your private terraces to your condo [become] landing platform[s] for your own personalized drone.”

Goodyear’s Concept Urban Aerial Mobility Ecosystem

Already, Singapore’s government is bolstering advanced programs to test drone capacity limits, with automated routing and private sector innovation. Most notably, Airbus’ ‘Skyways’ venture has begun building out its vision for urban air mobility in Singapore, where much of the company’s testing has taken place.

Yet, as megacities attract millions of new residents from across the planet, building out smart networks for autonomous and flying vehicles, one of our greatest priorities becomes smart city governance.

Smart Public Services & Optimized Urban Planning 

With the rise of urbanization, I'm led to the conclusion that megacities will become the primary nodes of data acquisition and integration, and thereby the primary mechanisms of governance.

In just over 10 years, the UN forecasts that around 43 cities will house over 10 million residents each. Autonomous and flying cars, delocalized work and education, and growing urban populations are all beginning to transform cities into interconnected, automated ecosystems, sprawled over vast swaths of geography. 

Now more than ever, smart public services and automated security will be needed to serve as the glue that holds these megacities together. Public sector infrastructure and services will soon be hosted on servers, detached from land and physical form. And municipal governments will operate at the scale of city-states, propelled by an upward trend in sovereign urban hubs that run almost entirely on their own.

Take e-Estonia.

Perhaps the least expected entry on a list of innovative nations, this former Soviet republic turned digital society is ushering in an age of technological statecraft.

Hosting every digitizable government function on the cloud, Estonia could run its government almost entirely on a server.

Starting in the 1990s, Estonia’s government has covered the nation with ultra-high-speed data connectivity, laying down tremendous amounts of fiber-optic cable. By 2007, citizens could vote from their living rooms.

With digitized law, Estonia signs policies into effect using cryptographically secure digital signatures, and every stage of the legislative process is available to citizens online, including plans for civil engineering projects. 

But it doesn’t stop there.

Citizens’ healthcare registry is run on the blockchain, allowing patients to own and access their own health data from anywhere in the world — X-rays, digital prescriptions, medical case notes — all the while tracking who has access.
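The core mechanism behind such a registry, a tamper-evident log of who accessed what, can be sketched with nothing but Python's standard library. Field names and structure here are illustrative, not Estonia's actual schema; the point is that each entry commits to the hash of the one before it, so any retroactive edit breaks the chain:

```python
import hashlib
import json
import time

def _hash(entry: dict) -> str:
    # Canonical JSON (sorted keys) so the same entry always hashes the same.
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class AccessLog:
    """Toy hash-chained access log: the core idea behind a
    blockchain-backed patient registry, reduced to its essentials."""

    def __init__(self):
        self.entries = []

    def record(self, accessor: str, record_id: str, ts=None) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"accessor": accessor, "record": record_id,
                "ts": ts if ts is not None else time.time(), "prev": prev}
        entry = {**body, "hash": _hash(body)}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        # Walk the chain; any edited entry or broken link fails.
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev"] != prev or _hash(body) != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

A patient (or auditor) can re-verify the whole chain at any time; changing a single past entry, say rewriting who accessed an X-ray, makes `verify()` return `False`.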

And i-Voting, civil courts, land registries, banking, taxes, and countless e-facilities allow citizens to access almost any government service with an electronic ID and personal PIN online. 

But perhaps Estonia’s most revolutionary breakthrough is its recently introduced e-citizenship. 

With over 50,000 e-residents from across 157 countries, Estonia issues electronic IDs to remote ‘inhabitants’ anywhere in the world, changing the nature of city borders themselves. While e-residency doesn’t grant territorial rights, over 6,000 e-residents have already established companies within Estonia’s jurisdiction.

From start to finish, the process takes roughly three hours, and 98 percent of businesses are established entirely online, offering data security, offshore benefits, and some of the most efficient taxes on the planet.

After companies are registered online, taxes are near-entirely automated — calculated in minutes and transmitted to the Estonian government with unprecedented ease.
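Part of what makes that automation tractable is the simplicity of the rule itself: Estonia taxes distributed profits rather than retained earnings, conventionally grossed up on a 20/80 basis. A minimal sketch of such a calculation (treat the rate and function name as illustrative, not an official formula):

```python
def distribution_tax(net_distribution: float, rate: float = 0.20) -> float:
    """Tax due on a profit distribution, grossed up from the net payout.
    Reflects the 20/80 convention of a distributed-profits tax system;
    the rate and rounding are illustrative assumptions."""
    return round(net_distribution * rate / (1 - rate), 2)
```

With a flat rule like this, a payout of 8,000 euros nets a deterministic tax bill in microseconds, which is why the filing step can be reduced to transmitting a few numbers.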

The implications of e-residency and digital governance are huge. As with any software, open-source code for digital governance could be copied perfectly at almost zero cost, lowering the barrier to entry for any megacity or village alike seeking its own urban e-services.

As Peter Diamandis's good friend David Li often points out, thriving village startup ecosystems and e-commerce hotbeds have taken off throughout China's countryside, driving the mass movement and meteoric rise of 'Taobao Villages.'

As smart city governance becomes democratized, what's to stop these villages, or any other town, from building out or even duplicating e-services?

But Estonia is not the only one pioneering rapid-fire government uses of blockchain technology. 

Within the next year, Dubai aims to become the first city powered entirely by blockchain, a long-standing goal of H.H. Sheikh Mohammed bin Rashid Al Maktoum.

Promising massive savings, government adoption of blockchain not only stands to save Dubai over 5.5 billion dirham (nearly US$1.5 billion), but also aims to roll out everything from emCash, a citywide cryptocurrency, to a blockchain-based vehicle monitoring system announced by the Roads and Transport Authority (RTA).

Systems similar to this blockchain-based network could become a major smart city staple, one day underpinning AVs, flying taxis, and on-demand Fly-as-a-Service personal drones.

With a similar mind to Dubai, multiple Chinese smart city pilots are quickly following suit.

Almost two years ago, China’s central government and President Xi Jinping designated a new megalopolis spanning three counties and rivaling almost every other Chinese special economic zone: Xiong’an New Area.

Deemed a “crucial [strategy] for the millennium to come,” Xiong’an is slated to bring in over 2.4 trillion RMB (a little over US$357 billion) in investment over the next decade, redirecting up to 6.7 million people and concentrating supercharged private sector innovation.

And forging a new partnership, Xiong'an plans to work in direct consultation with ConsenSys on Ethereum-based platforms for infrastructure and any number of smart city use cases. Beyond blockchain, Xiong'an will rely heavily on AI and has even posited plans for citywide cognitive computing.

But any discussion of smart government services would be remiss without mention of Singapore.

One of the most resourceful, visionary megacities on the planet, Singapore has embedded advanced computational models and high-tech solutions in everything from urban planning to construction of its housing units.

Responsible for creating living spaces for nearly 80 percent of its residents (through government-provided housing), the nation's Housing and Development Board (HDB) stands as an exemplar of disruptive government.

Singapore uses sophisticated computer models, enabling architects across the board to build environmentally optimized living and city spaces. Take Singapore’s simulated ocean breeze for optimized urban construction patterns. 

As explained by HDB's CEO Dr. Cheong Koon Hean, “Singapore is in the tropics, so we want to encourage the breezes to come through. Through computer simulation, you can actually position the blocks[,] public spaces [and] parks in such a way that help[s] you achieve this.”

National Geographic

And beyond its buildings, Singapore uses intricate, precision-layered infrastructure for essential services, down to water and electrical tunnels, commercial spaces underground, and complex transportation networks all beneath the city surface. 

Even in the realm of feeding its citizens, Singapore is fast becoming a champion of vertical farming. It opened the world’s first commercial vertical farm over 6 years ago, aiming to feed the entire island nation with a fraction of the land use. 

Whether giving citizens a vote on urban planning with the click of a button, or optimizing environmental conditions through public housing and commercial skyscrapers, smart city governance is a key pillar of the future.

Visions of the Future

Bringing together mega-economies, green city infrastructure and e-services that decimate inefficiency, future transportation and web-based urban services will shape how and where we live, on unthinkable dimensions.

Networked drones, whether personal or parcel deliveries, will circle layered airways, all operated using AI city brains and blockchain-based data infrastructures. Far below, driverless vehicles will give rise to on-demand Cars-as-a-Service, sprawling cities and newly unlocked real estate. And as growing megacities across the world begin grappling with next-gen technologies, who knows how many whimsical city visions and architectural plans will populate the Earth — and one day, even space.

Please keep me in mind as your life coach, and for senior executive engagements and board openings. If you hear of anything within your network that you think might be a positive fit, I'd so appreciate a heads up. Email me: Cliff@InvestmentCapitalGrowth.com or Schedule a call: Cliff

#innovation #engineer #engineering #tech #technology #artificialintelligence #AI #executive #business #CXO #CEO #executive #success #work #follow #leadership #travel #corporate #office #luxury #entrepreneur #coaching #businessman #professional #aviation #excellence #development #motivation

Contributor: Peter Diamandis