The Future of Virtual Reality: Moving from Deceptive to Disruptive

In 2016, venture investments in VR exceeded US$800 million, while AR and MR received a total of US$450 million. Just a year later, investments in AR and VR startups nearly tripled to US$3.6 billion.
And today, major players are bringing VR headsets to market that have the power to revolutionize the industry, as well as countless others.
Already, VR headset sales are expected to reach 98.4 million units by 2023, according to Futuresource Consulting. And beyond the hardware itself, Facebook’s $399 Oculus Quest brought in US$5 million in content sales within the first two weeks of its release this past spring.
With companies like Niantic ($4B valuation), Improbable ($2B valuation), and Unity ($6B valuation) achieving unicorn status in recent years, the VR space is massively heating up.
In this blog, we will dive into a brief history of VR, recent investment surges, and the future of this revolutionary technology.
Brief History of VR
For all of history, our lives have been limited by the laws of physics and mediated by our five senses. VR is rewriting those rules.
It’s letting us digitize experiences and teleport our senses into a computer-generated world where the limits of imagination become the only brake on reality. But it’s taken a while to get here.
Much like AI, the concept of VR has been around since the 1960s. The 1980s saw the first false dawn, when the earliest “consumer-facing” systems began to show up. In 1989, if you had a spare $250,000, you could purchase the EyePhone (decades before the iPhone), a VR system built by Jaron Lanier’s company VPL (Lanier coined the term ‘virtual reality’).
Unfortunately, the computer that powered that system was the size of a dorm room refrigerator, while the headset it required was bulky, awkward and only generated about five frames a second—six times slower than the average television of that era.
By the early 1990s, the hype had faded and VR entered a two-decade deceptive phase. Through the 2000s, the convergence of increasingly powerful game engines and AI-image rendering software flipped the script. Suddenly, deceptive became disruptive and the VR universe opened for business.
The Disruptive Phase: Surges in VR Investment
In 2014, Facebook spent $2 billion to acquire Oculus. By 2015, VentureBeat reported that an arena that typically saw only ten new entrants a year suddenly had 234.
In June 2016, HTC announced the release of its ‘Business Edition’ of the Vive for $1,200, followed six months later by their announcement of a tether-less VR upgrade.
A year later, Samsung cashed in on this shift, selling 4.3 million headsets and turning enough heads that everyone from Apple and Google to Cisco and Microsoft decided to investigate VR.
Phone-based VR showed up soon afterwards, dropping barriers to entry as low as $5. By 2018, the first wireless adaptors, standalone headsets and mobile headsets hit the market.
Resolution-wise, 2018 was also the year Google and LG doubled their displays’ pixels-per-inch count and pushed refresh rates from VPL’s five frames per second to over 120.
Around the same time, the systems began targeting more senses than just vision. HEAR360’s “omni-binaural” microphone suite captures 360 degrees of audio, which means immersive sound has now caught up to immersive visuals.
Touch has also reached the masses, with haptic gloves, vests and full body suits hitting the consumer market. Scent emitters, taste simulators, and every kind of sensor imaginable—including brainwave readers—are all trying to put the “very” into verisimilitude.
And the number of virtual explorers continues to mount. In 2017, there were 90 million active VR users, a figure that nearly doubled to 171 million by 2018. YouTube’s VR channel alone has over three million subscribers.
And that number is growing. By 2020, estimates put the VR market at $30 billion, and it’s hard to find a field that will be left untouched.
Future of VR: Emotive and Immersive Education
History class, 2030. This week’s lesson: Ancient Egypt. The pharaohs, the queens, the tombs—the full Tut.
Sure, you’d love to see the pyramids in person. But the cost of airfare? Hotel rooms for the entire class? Plus, taking two weeks off from school for the trip? None of these things are doable. Worse, even if you could go, you couldn’t go. Many of Egypt’s tombs are closed for repairs, and definitely off-limits to a group of teenagers.
Not to worry, VR solves these problems. And in VR world, you and your classmates can easily breach Queen Nefertari’s burial chamber, touch the hieroglyphics, even scramble atop her sarcophagus—impossible opportunities in physical reality. You also have a world-class Egyptologist as your guide.
But this kind of trip doesn’t require waiting until 2030. In 2018, Philip Rosedale and his team at High Fidelity pulled off exactly this virtual field trip.
First, they 3D-laser scanned every square inch of Queen Nefertari’s tomb. Next, they shot thousands of high resolution photos of the burial chamber. By stitching together more than ten thousand photos into a single vista, then laying that vista atop their 3D-scanned map, Rosedale created a stunningly accurate virtual tomb. Next, he gave a classroom full of kids HTC Vive VR headsets.
Because High Fidelity is a social VR platform, meaning multiple people can share the same virtual space at the same time, the entire class was able to explore that tomb together. In total, their fully immersive field trip to Egypt required zero travel time, zero travel expenses.
VR will not only cover traditional educational content, but also expand our emotional education.
Jeremy Bailenson, founding director of Stanford’s Virtual Human Interaction Lab, has spent two decades exploring VR’s ability to produce real behavioral change. He’s developed first-person VR experiences of racism, sexism, and other forms of discrimination.
For example, experiencing what it would be like to be an elderly, homeless, African American woman living on the streets of Baltimore produces real change: A significant shift in empathy and understanding.
“Virtual reality is not a media experience,” explains Bailenson. “When it’s done well, it’s an actual experience. In general our findings show that VR causes more behavior changes, causes more engagement, causes more influence than other types of traditional media.”
Nor is empathy the only emotion VR appears capable of training. In research conducted at USC, psychologist Skip Rizzo has had considerable success using virtual reality to treat PTSD in soldiers. Other scientists have extended this to the full range of anxiety disorders.
VR, especially when combined with AI, has the potential to facilitate a top shelf traditional education, plus all the empathy and emotional skills that traditional education has long been lacking.
When AI and VR converge with wireless 5G networks, our global education problem moves from the nearly impossible challenge of finding teachers and funding schools for the hundreds of millions in need, to the much more manageable puzzle of building a fantastic digital education system that we can give away for free to anyone with a headset. It’s quality and quantity on demand.
In the workplace, VR will serve as an efficient trainer for new employees.
10,000 of Walmart’s 1.2 million employees have taken VR-based skills management tests. Learning modules that once took 35 to 45 minutes now take 3 to 5 minutes. The company plans to train 1 million employees using the Oculus VR headset by the end of this year, and the upfront cost of the headsets will ultimately be recovered in labor efficiencies.
Multiple Worlds, Multiple Economies
We no longer live in only one place. We have real-world personae and online personae. This delocalized existence is only going to expand. With the rise of AR and VR, we’re introducing more layers to this equation.
We’ll have avatars for work and avatars for play, and all of these versions of ourselves are opportunities for new businesses. Consider the multi-million-dollar economy that sprang up around the very first virtual world, Second Life: people paid other people to design digital clothes and digital houses for their digital avatars.
Every time we add a new layer to the digital strata, we’re also adding an entire economy built upon that layer, meaning we are now conducting our business in multiple worlds at once.
Reserve Peter Diamandis’ next book: much of the above blog is drawn from his upcoming book, The Future Is Faster Than You Think. If you’d like to be notified when it comes out and receive special offers (signed copies, free stuff, etc.), register here for early-bird updates and to learn more!

How AR, AI, Sensors & Blockchain are Merging Into Web 3.0

How each of us sees the world is about to change dramatically…
For all of human history, the experience of looking at the world was roughly the same for everyone. But boundaries between the digital and physical are beginning to fade.
The world around us is gaining layer upon layer of digitized, virtually overlaid information — making it rich, meaningful, and interactive. As a result, our respective experiences of the same environment are becoming vastly different, personalized to our goals, dreams, and desires.
Welcome to Web 3.0, aka The Spatial Web. In version 1.0, static documents and read-only interactions limited the internet to one-way exchanges. Web 2.0 provided quite an upgrade, introducing multimedia content, interactive web pages, and participatory social media. Yet, all this was still mediated by 2D screens.
And today, we are witnessing the rise of Web 3.0, riding the convergence of high-bandwidth 5G connectivity, rapidly evolving AR eyewear, an emerging trillion-sensor economy, and ultra-powerful AIs.
As a result, we will soon be able to superimpose digital information atop any physical surrounding—freeing our eyes from the tyranny of the screen, immersing us in smart environments, and making our world endlessly dynamic.
In this third blog of our five-part series on augmented reality, we will explore the convergence between AR, AI, sensors, and blockchain, diving into the implications through a key use case in manufacturing.
A Tale of Convergence
Let’s deconstruct everything beneath the sleek AR display.
It all begins with Graphics Processing Units (GPUs) — electric circuits that perform rapid calculations to render images. (GPUs can be found in mobile phones, game consoles, and computers.)
However, because AR requires such extensive computing power, single GPUs will not suffice. Instead, blockchain can now enable distributed GPU processing power, and blockchains specifically dedicated to AR holographic processing are on the rise.
Next up, cameras and sensors will aggregate real-time data from any environment to seamlessly integrate physical and virtual worlds. Meanwhile, body-tracking sensors are critical for aligning a user’s self-rendering in AR with a virtually enhanced environment. Depth sensors then provide data for 3D spatial maps, while cameras absorb more surface-level, detailed visual input. In some cases, sensors might even collect biometric data, such as heart rate and brain activity, to incorporate health-related feedback in our everyday AR interfaces and personal recommendation engines.
The next step in the pipeline involves none other than AI. Processing enormous volumes of data instantaneously, embedded AI algorithms will power customized AR experiences in everything from artistic virtual overlays to personalized dietary annotations.
In retail, AIs will use your purchasing history, current closet inventory, and possibly even mood indicators to display digitally rendered items most suitable for your wardrobe, tailored to your measurements.
In healthcare, smart AR glasses will provide physicians with immediately accessible and maximally relevant information (parsed from the entirety of a patient’s medical records and current research) to aid in accurate diagnoses and treatments, freeing doctors to engage in the more human-centric tasks of establishing trust, educating patients and demonstrating empathy.
Convergence in Manufacturing
One of the nearest-term use cases of AR is manufacturing, as large producers begin dedicating capital to enterprise AR headsets. And over the next ten years, AR will converge with AI, sensors, and blockchain to multiply manufacturer productivity and employee experience.
(1) Convergence with AI
In initial application, digital guides superimposed on production tables will vastly improve employee accuracy and speed, while minimizing error rates.
The International Air Transport Association (IATA) — whose member airlines carry 82 percent of global air traffic — recently implemented industrial tech company Atheer’s AR headsets in cargo management. Shortly after rollout, IATA reported a whopping 30 percent improvement in cargo handling speed and no less than a 90 percent reduction in errors.
With similar success rates, Boeing brought Skylight’s smart AR glasses to the runway, now used in the manufacturing of hundreds of airplanes. Sure enough—the aerospace giant has now seen a 25 percent drop in production time and near-zero error rates.
Beyond cargo management and air travel, however, smart AR headsets will also enable on-the-job training without reducing the productivity of other workers or sacrificing hardware. Jaguar Land Rover, for instance, implemented Bosch’s Re’flekt One AR solution to gear technicians with “x-ray” vision: allowing them to visualize the insides of Range Rover Sport vehicles without removing any dashboards.
And as enterprise capabilities continue to soar, AIs will soon become the go-to experts, offering support to manufacturers in need of assembly assistance. Instant guidance and real-time feedback will dramatically reduce production downtime, boost overall output, and even help customers struggling with DIY assembly at home.
Perhaps one of the most profitable business opportunities, AR guidance through centralized AI systems will also serve to mitigate supply chain inefficiencies at extraordinary scale. Coordinating moving parts, eliminating the need for manned scanners at each checkpoint, and directing traffic within warehouses, joint AI-AR systems will vastly improve workflow while overseeing quality assurance.
After its initial implementation of AR “vision picking” in 2015, leading courier company DHL recently announced it would continue to use Google’s newest smart lens in warehouses across the world. Motivated by the initial group’s reported 15 percent jump in productivity, DHL’s decision is part of the logistics giant’s $300 million investment in new technologies.
And as direct-to-consumer e-commerce fundamentally transforms the retail sector, supply chain optimization will only grow increasingly vital. AR could very well prove the definitive step for gaining a competitive edge in delivery speeds.
As explained by Vital Enterprises CEO Ash Eldritch, “All these technologies that are coming together around artificial intelligence are going to augment the capabilities of the worker and that’s very powerful. I call it Augmented Intelligence. The idea is that you can take someone of a certain skill level and by augmenting them with artificial intelligence via augmented reality and the Internet of Things, you can elevate the skill level of that worker.”
Already, large producers like Goodyear, thyssenkrupp, and Johnson Controls are using the Microsoft HoloLens 2—priced at $3,500 per headset—for manufacturing and design purposes.
Perhaps the most heartening outcome of the AI-AR convergence is that, rather than replacing humans in manufacturing, AR is an ideal interface for human collaboration with AI. And as AI merges with human capital, prepare to see exponential improvements in productivity, professional training, and product quality.
(2) Convergence with Sensors
On the hardware front, these AI-AR systems will require a mass proliferation of sensors to detect the external environment and apply computer vision in AI decision-making.
To measure depth, for instance, some scanning depth sensors project a structured pattern of infrared light dots onto a scene, then detect and analyze the reflected light to generate 3D maps of the environment. Stereoscopic imaging, which uses two lenses, has also been commonly used for depth measurement. But leading devices like Microsoft’s HoloLens 2 and Intel’s RealSense 400-series cameras implement a newer method called “phased time-of-flight” (ToF).
In ToF sensing, the HoloLens 2 fires numerous 100-milliwatt (mW) lasers in quick bursts, then measures the distance to nearby objects by how far the phase of the returning light has shifted relative to the emitted signal. That phase difference reveals the location of each object within the field of view, enabling accurate hand-tracking and surface reconstruction.
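To make that relationship concrete, here is a minimal sketch of the phase-to-distance arithmetic behind phased ToF. The modulation frequency below is an assumed, illustrative value, not a published HoloLens 2 specification.

```python
import math

# Illustrative phase-to-distance conversion for a phased time-of-flight sensor.
# The modulation frequency is an assumed value for the sake of the example.
SPEED_OF_LIGHT = 299_792_458.0   # meters per second
MODULATION_FREQ = 100e6          # 100 MHz amplitude modulation (assumed)

def distance_from_phase_shift(phase_shift_rad: float) -> float:
    """Convert the measured phase shift of the returning light into a distance.

    The light travels to the object and back (2 * d), so
    d = c * phase_shift / (4 * pi * f_mod).
    """
    return SPEED_OF_LIGHT * phase_shift_rad / (4 * math.pi * MODULATION_FREQ)

# Example: a phase shift of pi/2 radians corresponds to roughly 0.37 meters.
print(f"{distance_from_phase_shift(math.pi / 2):.3f} m")
```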
With a far lower computing power requirement, the phased ToF sensor is also more durable than stereoscopic sensing, which relies on the precise alignment of two prisms. The phased ToF sensor’s silicon base also makes it easily mass-produced, rendering the HoloLens 2 a far better candidate for widespread consumer adoption.
To apply inertial measurement—typically used in airplanes and spacecraft—the HoloLens 2 additionally uses a built-in accelerometer, gyroscope, and magnetometer. Further equipped with four “environment understanding cameras” that track head movements, the headset also uses a 2.4MP HD photographic video camera and ambient light sensor that work in concert to enable advanced computer vision.
For natural viewing experiences, sensor-supplied gaze tracking increasingly creates depth in digital displays. Nvidia’s work on Foveated AR Display, for instance, brings the primary foveal area into focus, while peripheral regions fall into a softer background— mimicking natural visual perception and concentrating computing power on the area that needs it most.
Gaze tracking sensors are also slated to grant users control over their (now immersive) screens without any hand gestures. Simple visual cues, such as staring at an object for more than three seconds, will trigger commands automatically.
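As a rough illustration of how dwell-based gaze selection might work, here is a minimal sketch in Python. The three-second threshold comes from the example above; the class and its update loop are hypothetical and not any vendor’s actual eye-tracking API.

```python
from typing import Optional

# Hypothetical dwell-based gaze selector: fire a command once the user's gaze
# has rested on the same object longer than a threshold.
DWELL_THRESHOLD_S = 3.0

class GazeDwellSelector:
    def __init__(self) -> None:
        self.current_target: Optional[str] = None
        self.gaze_start: float = 0.0

    def update(self, target_id: Optional[str], now: float) -> Optional[str]:
        """Report the object under the user's gaze each frame; return it once
        the gaze has dwelt on that object past the threshold."""
        if target_id != self.current_target:
            self.current_target = target_id      # gaze moved to a new object
            self.gaze_start = now
            return None
        if target_id is not None and now - self.gaze_start >= DWELL_THRESHOLD_S:
            self.gaze_start = now                # fire once, then restart the timer
            return target_id
        return None

# Example: simulate staring at the same virtual object for just over three seconds.
selector = GazeDwellSelector()
for t in [0.0, 1.0, 2.0, 3.1]:
    if selector.update("virtual_button", now=t):
        print(f"t={t}s: activate command on virtual_button")
```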
And our manufacturing example above is not the only one. Stacked convergence of blockchain, sensors, AI and AR will disrupt almost every major industry.
Take healthcare, for example, wherein biometric sensors will soon customize users’ AR experiences. Already, MIT Media Lab’s Deep Reality group has created an underwater VR relaxation experience that responds to real-time brain activity detected by a modified version of the Muse EEG headband. The experience even adapts to users’ biometric data, from heart rate to electrodermal activity (captured by an Empatica E4 wristband).
Now rapidly dematerializing, sensors will converge with AR to improve physical-digital surface integration, intuitive hand and eye controls, and an increasingly personalized augmented world. Keep an eye on companies like MicroVision, now making tremendous leaps in sensor technology.
While I’ll be doing a deep dive into sensor applications across each industry in our next blog, it’s critical to first discuss how we might power sensor- and AI-driven augmented worlds.
(3) Convergence with Blockchain
Because AR requires much more compute power than typical 2D experiences, centralized GPUs and cloud computing systems are hard at work to provide the necessary infrastructure. Nonetheless, the workload is taxing and blockchain may prove the best solution.
A major player in this pursuit, Otoy aims to create the largest distributed GPU network in the world, called the Render Network (RNDR). Built on the Ethereum blockchain specifically for holographic media, and currently in beta testing, the network is set to revolutionize the accessibility of AR deployment.
Alphabet Chairman Eric Schmidt (an investor in Otoy’s network), has even said, “I predicted that 90% of computing would eventually reside in the web based cloud… Otoy has created a remarkable technology which moves that last 10%—high-end graphics processing—entirely to the cloud. This is a disruptive and important achievement. In my view, it marks the tipping point where the web replaces the PC as the dominant computing platform of the future.”
Leveraging the crowd, RNDR allows anyone with a GPU to contribute their power to the network for a commission of up to $300 a month in RNDR tokens. These can then be redeemed for cash or used to create users’ own AR content.
In a double win, Otoy’s blockchain network and similar iterations not only allow designers to profit when not using their GPUs, but also democratize the experience for newer artists in the field.
And beyond these networks’ power suppliers, distributing GPU processing power will allow more manufacturing companies to access AR design tools and customize learning experiences. By further dispersing content creation across a broad network of individuals, blockchain also has the valuable potential to boost AR hardware investment across a number of industry beneficiaries.
On the consumer side, startups like Scanetchain are also entering the blockchain-AR space for a different reason. Allowing users to scan items with their smartphone, Scanetchain’s app provides access to a trove of information, from manufacturer and price, to origin and shipping details.
Based on NEM (a peer-to-peer cryptocurrency that implements a blockchain consensus algorithm), the app aims to make information far more accessible and, in the process, create a social network of purchasing behavior. Users earn tokens by watching ads, and all transactions are hashed into blocks and securely recorded.
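For readers curious what “hashed into blocks and securely recorded” means in practice, below is a minimal, generic sketch of chaining transaction records by hash. It illustrates the general idea only; it is not Scanetchain’s or NEM’s actual implementation, and the transaction fields are invented.

```python
import hashlib
import json
import time

def make_block(transactions: list, previous_hash: str) -> dict:
    """Bundle transactions with the previous block's hash, then hash the bundle.

    Because each block embeds the hash of its predecessor, tampering with an
    earlier record would change every hash that follows it.
    """
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "previous_hash": previous_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# Example: two chained blocks of (invented) scan transactions.
genesis = make_block([{"item": "scanned product", "price": 12.99}], previous_hash="0" * 64)
follow_up = make_block([{"item": "another scan", "price": 3.50}], previous_hash=genesis["hash"])
print(follow_up["previous_hash"] == genesis["hash"])  # True: the chain is linked
```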
The writing is on the wall—our future of brick-and-mortar retail will largely lean on blockchain to create the necessary digital links.
Final Thoughts
Integrating AI into AR creates an “auto-magical” manufacturing pipeline that will fundamentally transform the industry, cutting down on marginal costs, reducing inefficiencies and waste, and maximizing employee productivity.
Bolstering the AI-AR convergence, sensor technology is already blurring the boundaries between our augmented and physical worlds, soon to be near-undetectable. While intuitive hand and eye motions dictate commands in a hands-free interface, biometric data is poised to customize each AR experience to be far more in touch with our mental and physical health.
And underpinning it all, distributed computing power with blockchain networks like RNDR will democratize AR, boosting global consumer adoption at plummeting price points.
As AR soars in importance—whether in retail, manufacturing, entertainment, or beyond—the stacked convergence discussed above merits significant investment over the next decade. Already, 52 Fortune 500 companies have begun testing and deploying AR/VR technology. And while global revenue from AR/VR stood at $5.2 billion in 2016, market intelligence firm IDC predicts the market will exceed $162 billion in value by 2020.
The augmented world is only just getting started.

The Future of Entertainment. I think you’ll be surprised!
Twenty years ago, entertainment was dominated by a handful of producers and monolithic broadcasters, a near-impossible market to break into. Today, the industry is almost entirely dematerialized, while storytellers and storytelling mediums explode in number. And this is just the beginning.
Netflix turned entertainment on its head practically overnight, shooting from a market cap of US$8 billion in 2010 (the same year Blockbuster filed for bankruptcy) to a record US$185.6 billion only eight years later. This year, it is expected to spend a whopping US$15 billion on content alone.
Meanwhile, VR platforms like Google’s Daydream and Oculus have only begun bringing the action to you, while mixed reality players like Dreamscape will forever change the way we experience stories, exotic environments and even classrooms of the future.
In the words of Barry Diller, a former Fox and Paramount executive and the chairman of IAC, “Hollywood is now irrelevant.”
In this two-part series, I’ll be diving into three future trends in the entertainment industry: AI-based content curation, participatory story-building, and immersive VR/AR/MR worlds.
Today, I’ll be exploring the creative future of AI’s role in generating on-demand, customized content and collaborating with creatives, from music to film, in refining their craft.
Let’s dive in!
AI Entertainment Assistants
For many of us, film brought to life our conceptions of AI, from Marvel’s JARVIS to HAL in 2001: A Space Odyssey.
And now, over 50 years later, AI is bringing stories to life like we’ve never seen before.
Converging with the rise of virtual reality and colossal virtual worlds, AI has begun to create vastly detailed renderings of dead stars, generate complex supporting characters with intricate story arcs, and even bring your favorite stars — whether Marlon Brando or Amy Winehouse — back to the big screen and into a built environment.
While still in its nascent stages, AI has already been used to embody virtual avatars that you can converse with in VR, soon to be customized to your individual preferences.
But AI will have far more than one role in the future of entertainment as industries converge atop this fast-moving arena.
You’ve likely already seen the results of complex algorithms that predict the precise percentage likelihood you’ll enjoy a given movie or TV series on Netflix, or recommendation algorithms that queue up your next video on YouTube. Or think Spotify playlists that build out an algorithmically refined, personalized roster of your soon-to-be favorite songs.
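As a toy illustration of the idea behind those percentage predictions, the sketch below scores an unseen title for one user from the ratings of users with similar taste (simple user-based collaborative filtering). Real recommenders at Netflix, YouTube, or Spotify are vastly more elaborate, and the ratings matrix here is invented.

```python
import numpy as np

# Tiny made-up ratings matrix: rows are users, columns are titles, 0 = not yet rated.
ratings = np.array([
    [5, 4, 0, 1],   # user 0
    [4, 5, 0, 1],   # user 1
    [1, 0, 5, 4],   # user 2
])

def predict(user: int, item: int) -> float:
    """Predict a user's rating of an item from similarity-weighted neighbor ratings."""
    target = ratings[user]
    weighted_sum, total_weight = 0.0, 0.0
    for other, row in enumerate(ratings):
        if other == user or row[item] == 0:
            continue
        both_rated = (target > 0) & (row > 0)          # titles both users have rated
        if not both_rated.any():
            continue
        similarity = np.dot(target[both_rated], row[both_rated]) / (
            np.linalg.norm(target[both_rated]) * np.linalg.norm(row[both_rated]))
        weighted_sum += similarity * row[item]
        total_weight += abs(similarity)
    return weighted_sum / total_weight if total_weight else 0.0

print(f"Predicted rating of title 2 for user 0: {predict(0, 2):.2f}")
```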
And AI entertainment assistants have barely gotten started.
Building on assistants like Google Assistant and Huawei’s Xiaoyi (a voice assistant that lives inside Huawei’s smartphones and its AI Cube smart speaker), AI advancements will soon enable your assistant to select songs based on your current and desired mood, pick out movies that bridge your and your friends’ viewing preferences on a group film night, or even serve up games whose characters are personalized to interact with you as you jump from level to level.
Or even imagine your own home leveraging facial technology to assess your disposition, cross-reference historical data on your entertainment choices at a given time or frame of mind, and automatically queue up a context-suiting song or situation-specific video for comic relief.
Curated Content Generators
Beyond personalized predictions, however, AIs are now taking on content generation, multiplying your music repertoire, developing entirely new plotlines, and even bringing your favorite actors back to the screen or — better yet — directly into your living room.
Take AI motion transfer, for instance.
Employing generative adversarial networks (GANs), a subset of machine learning, a team of researchers at UC Berkeley has developed an AI motion transfer technique that superimposes the dance moves of professionals onto any amateur (‘target’) individual in seamless video.
By first mapping the target’s movements onto a stick figure, Caroline Chan and her team create a database of frames, each frame associated with a stick-figure pose. They then use this database to train a GAN and thereby generate an image of the target person based on a given stick-figure pose.
Map a series of poses from the source video to the target, frame-by-frame, and soon anyone might moonwalk like Michael Jackson, glide like Ginger Rogers or join legendary dancers on a virtual stage.
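The sketch below illustrates that transfer step in miniature: a small pose-to-image generator, assumed to have already been trained on footage of the target person, is applied frame by frame to stick-figure poses extracted from the source video. The tiny network and the random tensors standing in for poses are placeholders, not the Berkeley team’s actual model or data.

```python
import torch
import torch.nn as nn

class PoseToFrameGenerator(nn.Module):
    """Toy generator mapping a stick-figure pose image to a frame of the target person."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, stick_figure):
        return self.net(stick_figure)

generator = PoseToFrameGenerator()   # assume weights were learned from target footage

# Stand-ins for stick-figure poses extracted from the source (professional) video.
source_poses = [torch.randn(1, 3, 64, 64) for _ in range(4)]

# Map each source pose to a synthesized frame of the target person; stitched together,
# these frames show the target "performing" the source dancer's moves.
with torch.no_grad():
    transferred_frames = [generator(pose) for pose in source_poses]
print(len(transferred_frames), tuple(transferred_frames[0].shape))
```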
Somewhat reminiscent of AI-generated “deepfakes,” the use of generative adversarial networks in film could massively disrupt entertainment, bringing legendary performers back to the screen and granting anyone virtual stardom.
Just as digital artists increasingly enhance computer-generated imagery (CGI) techniques with high-fidelity 3D scanning for unprecedentedly accurate rendition of everything from pores to lifelike hair textures, AI is about to give CGI a major upgrade.
Fed countless hours of footage, AI systems can be trained to refine facial movements and expressions, replicating them on any CGI model of a character, whether a newly generated face or iterations of your favorite actors.
Want Marilyn Monroe to star in a newly created Fast and Furious film? No problem! Keen to cast your brother in one of the original Star Wars movies? It might soon be as easy as contracting an AI to edit him in, ready for his next Jedi-themed birthday.
Companies like Digital Domain, co-founded by James Cameron, are hard at work to pave the way for such a future. Already, Digital Domain’s visual effects artists employ proprietary AI systems to integrate humans into CGI character design with unparalleled efficiency.
As explained by Digital Domain’s Digital Human Group director Darren Hendler, “We can actually take actors’ performances — and especially facial performances — and transfer them [exactly] to digital characters.”
And recently, AI-CGI cooperation took center stage in Avengers: Endgame, seamlessly recreating facial expressions on its villain Thanos.
Even in the realm of video games, upscaling algorithms have been used to revive childhood classic video games, upgrading low-resolution features with striking new graphics.
One company that has begun commercializing AI upscaling techniques is Topaz Labs. While some manual craftsmanship is required, the use of GANs has dramatically sped up the process, promising extraordinary implications for gaming visuals.
But how do these GANs work? After training a GAN on millions of pairs of low-res and high-res images, one part of the algorithm attempts to build a high-resolution frame from its low-resolution counterpart, while the second algorithm component evaluates this output. And as the feedback loop of generation and evaluation drives the GAN’s improvement, the upscaling process only gets more efficient over time.
“After it’s seen these millions of photos many, many times it starts to learn what a high resolution image looks like when it sees a low resolution image,” explained Topaz Labs CTO Albert Yang.
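Here is a minimal sketch of that generate-and-evaluate loop for 2x upscaling: one network builds a high-resolution frame from a low-resolution input, a second judges whether the result looks real, and each trains against the other. Toy layers and random tensors stand in for real photo pairs; this illustrates the GAN idea in general, not Topaz Labs’ implementation.

```python
import torch
import torch.nn as nn

class Upscaler(nn.Module):
    """Toy generator: builds a 64x64 frame from a 32x32 input (2x upscaling)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, low_res):
        return self.net(low_res)

class Critic(nn.Module):
    """Toy evaluator: judges whether a 64x64 frame looks like a real high-res photo."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),  # -> 64 x 32 x 32
            nn.Flatten(),
            nn.Linear(64 * 32 * 32, 1),
        )

    def forward(self, image):
        return self.net(image)

G, D = Upscaler(), Critic()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# Random tensors stand in for one (low-res, true high-res) training pair.
low_res, high_res = torch.randn(1, 3, 32, 32), torch.randn(1, 3, 64, 64)

# Critic step: score real high-res images toward 1 and upscaled outputs toward 0.
loss_d = bce(D(high_res), torch.ones(1, 1)) + bce(D(G(low_res).detach()), torch.zeros(1, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Upscaler step: produce outputs the critic scores as real.
loss_g = bce(D(G(low_res)), torch.ones(1, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```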
Imagine a future in which we might transform any low-resolution film or image with remarkable detail at the click of a button.
But it isn’t just film and gaming that are getting an AI upgrade. AI songwriters are now making a major dent in the music industry, from personalized repertoires to melody creation.
AI Songwriters and Creative Collaborators
While not seeking to replace your favorite song artists, AI startups are leaping onto the music scene, raising millions in VC investments to assist musicians with creation of novel melodies and underlying beats… and perhaps one day with lyrics themselves.
Take Flow Machines, a songwriting algorithm already in commercial use. Now employed by numerous musical artists as a creative assistant, Flow Machines has even made appearances on Spotify playlists and top music charts.
And startups are fast following suit, including Amper, Popgun, Jukedeck and Amadeus Code.
But how do these algorithms work? By processing thousands of genre-specific songs or an artist’s genre-mixed playlist, songwriting algorithms are now capable of optimizing and outputting custom melodies and chord progressions that interpret a given style. These in turn help human artists refine tunes, derive new beats, and ramp up creative ability at scales previously unimaginable.
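As the simplest possible illustration of learning a style from example melodies and generating new ones, the sketch below builds a note-to-note transition table from a tiny corpus and samples a fresh melody from it. Commercial systems such as Flow Machines or Amper rely on far richer models; the corpus and note names here are invented.

```python
import random
from collections import defaultdict

# Invented two-melody "playlist"; notes are written as pitch names.
corpus = [
    ["C4", "E4", "G4", "E4", "C4", "D4", "E4", "C4"],
    ["C4", "D4", "E4", "G4", "E4", "D4", "C4", "C4"],
]

# Learn which notes tend to follow which in this style.
transitions = defaultdict(list)
for melody in corpus:
    for current_note, next_note in zip(melody, melody[1:]):
        transitions[current_note].append(next_note)

def generate(start: str = "C4", length: int = 8) -> list:
    """Sample a new melody by walking the learned transition table."""
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1]) or [start]  # restart on a dead end
        melody.append(random.choice(options))
    return melody

print(generate())
```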
As explained by Amadeus Code’s founder Taishi Fukuyama, “History teaches us that emerging technology in music leads to an explosion of art. For AI songwriting, I believe [it’s just] a matter of time before the right creators congregate around it to make the next cultural explosion.”
Envisioning a future wherein machines form part of the creation process, Will.i.am has even described a scenario in which he might tell his AI songwriting assistant, “Give me a shuffle pattern, and pull up a bass line, and give me a Bootsy Collins feel…”
AI: The Next Revolution in Creativity
Over the next decade, entertainment will undergo its greatest revolution yet. As AI converges with VR and crashes into democratized digital platforms, we will soon witness the rise of everything from edu-tainment, to interactive game-based storytelling, to immersive worlds, to AI characters and plot lines created on-demand, anywhere, for anyone, at almost zero cost.
We’ve already seen the dramatic dematerialization of entertainment. Streaming has taken the world by storm, as democratized platforms and new broadcasting tools birth new convergence between entertainment and countless other industries.
Posing the next major disruption, AI is skyrocketing to new heights of creative and artistic capacity, multiplying content output and allowing any artist to refine their craft, regardless of funding, agencies or record deals.
And as AI advancements pick up content generation and facilitate creative processes on the back end, virtual worlds and AR/VR hardware will transform our experience of content on the front end.

In our next blog of the series, we’ll dive into mixed reality experiences, VR for collaborative storytelling, and AR interfaces that bring location-based entertainment to your immediate environment.
Contributor: Peter Diamandis