The Investment Capital Growth Blog

Welcome To The ICG Blog

Strategic Insights For Business Leaders & Their Teams

Investment Capital Growth is dedicated to the personal and professional development of C-Level Executives and the management teams that run modern businesses. Our blog shares insights and strategies culled from years of entrepreneurial and executive experience. Our thought leaders regularly publish business articles to inspire and empower.

Get Inspired, Stay Connected:

  • Subscribe To Our Blog For Updates
  • Follow ICG on Social Media
  • Engage Our Consultants

Subscribe To The ICG Blog



How to combine easy-to-create crowd solutions with AI to achieve superhuman performance levels

Posted by Cliff Locks On January 24, 2019 at 11:45 am / In: Uncategorized


AI and the Crowd: A Collaboration

This article provides live examples of AI and the crowd creating an effective collaboration. We start with the field of biology. The machinery of biology is built from proteins, and a protein’s shape defines its function.

One of the most challenging (and consequential) problems in modern medicine revolves around predicting the structure of proteins based on their amino acid sequences.

The human body can make a vast number of different proteins, with estimates in the tens of thousands. How a protein folds into a 3D structure depends on the number and types of amino acids it contains.

Normally, proteins take on whatever shape is most energy efficient, but they can become tangled and mis-folded, leading to disorders such as Parkinson’s and Alzheimer’s disease.

A protein can twist and bend between each amino acid, so that a protein with hundreds of amino acids has the potential to take on a staggering number of different structures: 1 followed by 300 zeroes (i.e. more possible ways to fold than there are atoms in the universe).

Understanding the mechanics of protein folding would be a boon to medicine and drug discovery. Yet up until a decade ago, figuring out which of the many folding configurations a protein would take was a problem relegated (in vain) to the best of supercomputers….

Until one team of researchers at the University of Washington launched an online game called FoldIt.

FoldIt: The Crowd Solution

Aiming to crowdsource this seemingly impossible ‘puzzle,’ FoldIt gives players a sequence of amino acids, which users can then experiment with and fold into any number of structures.

More than just a number-crunching task, finding the right folding combination is in part a game of intuition. And within weeks, FoldIt proved that the human brain’s pattern-recognition capacity, in aggregate, could outperform even the most complex computer programs.

Soon after FoldIt’s release in 2008, tens of thousands of online gamers signed up to compete — young, old, individual puzzle players, and even competitive teams of aspiring scientists.

Their goal? Click, drag and pull a given protein’s chains into configurations that minimize energy, just as molecules self-assemble in real life.
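The players’ task of pulling chains into minimum-energy configurations is, at heart, an energy-minimization search. Here is a minimal Python sketch of that idea using simulated annealing; the toy energy function, step size and cooling schedule are all illustrative assumptions, not real protein physics:

```python
import math
import random

def toy_energy(angles):
    """Toy 'folding' energy: lowest when the bond angles settle near a
    compact value and neighbours agree. Purely illustrative, not physics."""
    return sum((a - 0.5) ** 2 + 0.1 * abs(a - b)
               for a, b in zip(angles, angles[1:]))

def anneal(n_bonds=20, steps=5000, seed=0):
    """Simulated annealing: randomly 'pull' one bond angle at a time,
    keeping moves that lower the energy (and, early on, some that don't)."""
    rng = random.Random(seed)
    angles = [rng.random() for _ in range(n_bonds)]
    energy = toy_energy(angles)
    for step in range(steps):
        temp = 1.0 * (1 - step / steps) + 1e-3      # cooling schedule
        i = rng.randrange(n_bonds)
        old = angles[i]
        angles[i] = min(1.0, max(0.0, old + rng.uniform(-0.1, 0.1)))
        new_energy = toy_energy(angles)
        if new_energy < energy or rng.random() < math.exp((energy - new_energy) / temp):
            energy = new_energy                      # accept the move
        else:
            angles[i] = old                          # reject: restore the old angle
    return energy

print(anneal())  # a much lower energy than the random starting fold
```

Each accepted move is the algorithmic analogue of a player’s click-and-drag: a local tweak that is kept if it brings the fold closer to a minimum-energy shape.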

Seeing the bigger picture, leveraging spatial reasoning, and following gut instincts on the basis of ‘what doesn’t look right,’ the crowd consistently outperformed its software counterpart, yielding tremendous contributions to Alzheimer’s and cancer research.

And in 2011, the FoldIt community landed a key victory in the fight against HIV/AIDS.

Over the course of the preceding decade, researchers had struggled with countless methods to determine the structure of a retroviral protease of the Mason-Pfizer monkey virus, a critical enzyme in the replication of HIV. Yet failure after failure left top scientists baffled, unable to solve the protein’s crystal structure.

In a last-ditch attempt to leverage the crowd, one Polish scientist turned to FoldIt’s army of puzzle-solvers….

Within just ten days, one team of online gamers scattered across three continents solved the viral protein’s structure, crowning ten years of hard work with a ten-day victory.

And until very recently, this was the best possible option for predicting protein folding…

Enter AlphaFold

Having reached superhuman performance levels in the games of chess and Go, Google’s DeepMind recently turned its neural networks to healthcare.

In 2018, DeepMind announced a new deep learning tool called AlphaFold for predicting protein folding to aid drug discovery.

Progress in the field is measured by a biennial protein-structure-prediction competition, the ‘Critical Assessment of Structure Prediction’ (CASP).

The rules are simple. Teams are all given an amino acid sequence, and the team that submits the most accurate structure prediction wins.

On its first foray into the competition, AlphaFold won hands-down against a field of 98 entrants, predicting the most accurate structure for 25 out of 43 proteins; the second-place team produced the most accurate prediction for only 3 of the 43.

How fast does AlphaFold work? The program initially took a couple of weeks to predict a protein structure, but now creates predicted models in a couple of hours.

AlphaFold is only the beginning of AI’s quest to make a real impact in the realm of disease treatment and medical breakthroughs.

As Peter Diamandis has predicted, DeepMind’s victory represents the second step in an evolution from the crowd to pure AI, in which AI begins to take over highly complex tasks from the interim step of the crowd.

In the meantime…. what if we could combine the collective intelligence of the crowd with the computational power of machines?

AI and the Crowd: A Collaboration

Today, we occupy a rare moment in history where AI can facilitate and even enhance the genius of collective human intelligence, or what we might call the ‘hive mind.’

Back in 2016, Eric Schmidt suggested that the next Google will be a crowdsourcing AI company:

“[That] model, [in which] you crowdsource information in, you learn it, and then you sell it, is in my view a highly-likely candidate for the next $100 billion corporations.”

“If I was starting a company, I’d start with that premise today. How can I use this concept of scalability and get my users to teach me? If my users teach me and I can sell to them and others a service that is better than their knowledge, it’s a win for everybody.”

Complementary forces, AI and crowdsourced wisdom offer radically different benefits.

In the case of collective intelligence, humans have the major advantage of intuition.

Instead of number-crunching our way through any problem, we know when to crunch numbers or which tools to use, and can redefine complex puzzles from creative new vantage points.

Unlike pretty much all AI systems, we can also often explain the reasoning behind our decisions and choices, the logic driving our conclusions.

And when combined, the aggregate predictions of crowds tend to be extraordinarily accurate, far exceeding individual estimates.
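That accuracy gain is easy to demonstrate: when individual errors are independent, they tend to cancel in the average. A small simulation in Python (the true value, crowd size and noise level are arbitrary illustrative choices):

```python
import random
import statistics

def crowd_vs_individuals(true_value=742.0, n=500, noise=150.0, seed=1):
    """Each 'member' guesses the true value with independent noise.
    Compare the average individual error with the error of the crowd mean."""
    rng = random.Random(seed)
    guesses = [true_value + rng.gauss(0, noise) for _ in range(n)]
    avg_individual_error = statistics.mean(abs(g - true_value) for g in guesses)
    crowd_error = abs(statistics.mean(guesses) - true_value)
    return avg_individual_error, crowd_error

ind_err, crowd_err = crowd_vs_individuals()
print(ind_err, crowd_err)  # the crowd mean lands far closer to the truth
```

The individual errors stay large, but because they point in random directions, the crowd’s average cancels most of them out.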

But even aggregating the expertise and ideas of thousands of minds has its limits and inaccuracies.

Enter AI-aided swarm intelligence.

Already, MIT’s Center for Collective Intelligence is working to combine the best of collective genius with machine systems that optimize our productivity, hive mind solutions, company profits and even the methods we use to think about difficult issues.

If two brains are better than one, how could we take advantage of a hundred, a thousand, or even 8 billion?

And MIT isn’t alone.

Now, a company called Unanimous A.I. has developed swarm AI-based software solutions that connect people and their collective expertise, settling on crowdsourced answers in real-time.

With Swarm AI, Unanimous’ crowdsourced predictions — from sports wagers to Oscar betting — now outperform both top expert forecasts and pure AI-generated projections.

Given any question, Swarm AI projects several possible answers on a screen, measuring the confidence with which each player pulls a virtual bubble toward their preferred answer.

Aggregating “collective confidence” of the group, Unanimous’ algorithm then settles on an answer, outperforming all traditional voting systems.
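Unanimous’ actual real-time swarm dynamics are proprietary, so the following is only a rough sketch of the underlying idea: weight each participant’s pull by its confidence and pick the answer with the largest total. The pull data here are hypothetical:

```python
from collections import defaultdict

def swarm_answer(pulls):
    """Aggregate (answer, confidence) 'pulls' from the participants.
    Each confidence is in [0, 1]; the answer with the highest total pulled
    confidence wins. A stand-in for Unanimous' real-time swarm dynamics."""
    totals = defaultdict(float)
    for answer, confidence in pulls:
        totals[answer] += confidence
    return max(totals, key=totals.get)

pulls = [("Team A", 0.9), ("Team B", 0.6),
         ("Team A", 0.4), ("Team B", 0.3), ("Team A", 0.2)]
print(swarm_answer(pulls))  # → Team A (total confidence 1.5 vs 0.9)
```

Unlike a simple majority vote, a lukewarm majority can be overruled here by a smaller group pulling with strong conviction.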

Imagine the implications: crowdsourced medical diagnoses, financial predictions, even tech-aided democracies and moral value judgments…

Already, Swarm AI-moderated group predictions have offered a tremendous upgrade to radiological evaluations. Moderating the assessments of eight leading Stanford radiologists regarding whether 50 chest X-rays showed signs of pneumonia, Unanimous’ software yielded a group prediction 33 percent more accurate than any individual evaluation.

And some have even posited the use of ASI (Artificial Swarm Intelligence) in determining ethical judgment calls.

Final Thoughts

The use of crowdsourcing to train AI systems is one of the most overlooked, deceptively growing and MONUMENTAL industries of the next decade….

If artificial intelligence is the electricity of the 21st century, collective intelligence will soon be its most valuable fuel.

And as we continue to approach the merging of mind and machine at an ever-accelerating pace, just imagine the unprecedented new solutions we can create, together.

Please keep me in mind as your life coach, openings for senior executive engagements, and board openings. If you hear of anything within your network that you think might be a positive fit, I’d so appreciate if you could send a heads up my way.


Contributor: Peter Diamandis

Machine Learning Confronts the Elephant in the Room – A visual prank exposes an Achilles’ heel of computer vision systems: Unlike humans, they can’t do a double take.

Posted by Cliff Locks On October 31, 2018 at 10:06 am / In: Uncategorized


Score one for the human brain. In a new study, computer scientists found that artificial intelligence systems fail a vision test a child could accomplish with ease.

“It’s a clever and important study that reminds us that ‘deep learning’ isn’t really that deep,” said Gary Marcus, a neuroscientist at New York University who was not affiliated with the work.

The result takes place in the field of computer vision, where artificial intelligence systems attempt to detect and categorize objects. They might try to find all the pedestrians in a street scene, or just distinguish a bird from a bicycle (which is a notoriously difficult task). The stakes are high: As computers take over critical tasks like automated surveillance and autonomous driving, we’ll want their visual processing to be at least as good as the human eyes they’re replacing.

It won’t be easy. The new work accentuates the sophistication of human vision — and the challenge of building systems that mimic it. In the study, the researchers presented a computer vision system with a living room scene. The system processed it well. It correctly identified a chair, a person, books on a shelf. Then the researchers introduced an anomalous object into the scene — an image of an elephant. The elephant’s mere presence caused the system to forget itself: Suddenly it started calling a chair a couch and the elephant a chair, while turning completely blind to other objects it had previously seen.

“There are all sorts of weird things happening that show how brittle current object detection systems are,” said Amir Rosenfeld, a researcher at York University in Toronto and co-author of the study along with his York colleague John Tsotsos and Richard Zemel of the University of Toronto.

Researchers are still trying to understand exactly why computer vision systems get tripped up so easily, but they have a good guess. It has to do with an ability humans have that AI lacks: the ability to understand when a scene is confusing and thus go back for a second glance.

The Elephant in the Room

Eyes wide open, we take in staggering amounts of visual information. The human brain processes it in stride. “We open our eyes and everything happens,” said Tsotsos.

Artificial intelligence, by contrast, creates visual impressions laboriously, as if it were reading a description in Braille. It runs its algorithmic fingertips over pixels, which it shapes into increasingly complex representations. The specific type of AI system that performs this process is called a neural network. It sends an image through a series of “layers.” At each layer, the details of the image — the colors and brightnesses of individual pixels — give way to increasingly abstracted descriptions of what the image portrays. At the end of the process, the neural network produces a best-guess prediction about what it’s looking at.

“It’s all moving from one layer to the next by taking the output of the previous layer, processing it and passing it along to the next layer, like a pipeline,” said Tsotsos.
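That pipeline can be sketched in a few lines of plain Python. The weights below are hand-picked purely for illustration; a real network learns them from data and uses far more layers and units:

```python
import math

def dense(inputs, weights, biases):
    """One 'layer': weighted sums of the previous layer's output,
    squashed through a nonlinearity (here tanh)."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def feed_forward(pixels, layers):
    """The pipeline Tsotsos describes: each layer consumes the previous
    layer's output and passes its own along, in one direction only."""
    activation = pixels
    for weights, biases in layers:
        activation = dense(activation, weights, biases)
    return activation

# A tiny hand-wired two-layer network over four 'pixels' (illustrative only).
layers = [
    ([[0.5, -0.2, 0.1, 0.3], [-0.1, 0.4, 0.2, -0.3]], [0.0, 0.1]),  # pixels -> features
    ([[1.0, -1.0]], [0.0]),                                         # features -> prediction
]
print(feed_forward([0.2, 0.8, 0.5, 0.1], layers))  # the network's best-guess output
```

Note that nothing in `feed_forward` ever flows backwards: once a layer has passed its output along, there is no mechanism for a later layer to ask it to look again, which is exactly the limitation the article goes on to discuss.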

Neural networks are adept at specific visual chores. They can outperform humans in narrow tasks like sorting objects into best-fit categories — labeling dogs with their breed, for example. These successes have raised expectations that computer vision systems might soon be good enough to steer a car through crowded city streets.

They’ve also provoked researchers to probe their vulnerabilities. In recent years there have been a slew of attempts, known as “adversarial attacks,” in which researchers contrive scenes to make neural networks fail. In one experiment, computer scientists tricked a neural network into mistaking a turtle for a rifle. In another, researchers waylaid a neural network by placing an image of a psychedelically colored toaster alongside ordinary objects like a banana.

This new study has the same spirit. The three researchers fed a neural network a living room scene: A man seated on the edge of a shabby chair leans forward as he plays a video game. After chewing on this scene, a neural network correctly detected a number of objects with high confidence: a person, a couch, a television, a chair, some books.

[Figure: In the unmodified image at left, the neural network correctly identifies many items in a cluttered living room scene with high probability. Add an elephant, as in the image at right, and problems arise. The chair in the lower-left corner becomes a couch, the nearby cup disappears, and the elephant gets misidentified as a chair. Credit: Amir Rosenfeld]

Then the researchers introduced something incongruous into the scene: an image of an elephant in semiprofile. The neural network started getting its pixels crossed. In some trials, the elephant led the neural network to misidentify the chair as a couch. In others, the system overlooked objects, like a row of books, that it had correctly detected in earlier trials. These errors occurred even when the elephant was far from the mistaken objects.

Snafus like those extrapolate in unsettling ways to autonomous driving. A computer can’t drive a car if it might go blind to a pedestrian just because a second earlier it passed a turkey on the side of the road.

And as for the elephant itself, the neural network was all over the place: Sometimes the system identified it correctly, sometimes it called the elephant a sheep, and sometimes it overlooked the elephant completely.

“If there is actually an elephant in the room, you as a human would likely notice it,” said Rosenfeld. “The system didn’t even detect its presence.”

Everything Connected to Everything

When human beings see something unexpected, we do a double take. It’s a common phrase with real cognitive implications — and it explains why neural networks fail when scenes get weird.

Today’s best neural networks for object detection work in a “feed forward” manner. This means that information flows through them in only one direction. They start with an input of fine-grained pixels, then move to curves, shapes, and scenes, with the network making its best guess about what it’s seeing at each step along the way. As a consequence, errant observations early in the process end up contaminating the end of the process, when the neural network pools together everything it thinks it knows in order to make a guess about what it’s looking at.

“By the top of the neural network you have everything connected to everything, so you have the potential to have every feature in every location interfering with every possible output,” said Tsotsos.

The human way is better. Imagine you’re given a very brief glimpse of an image containing a circle and a square, with one of them colored blue and the other red. Afterward you’re asked to name the color of the square. With only a single glance to go on, you’re likely to confuse the colors of the two shapes. But you’re also likely to recognize that you’re confused and to ask for another look. And, critically, when you take that second look, you know to focus your attention on just the color of the square.

“The human visual system says, ‘I don’t have the right answer yet, so I have to go backwards to see where I might have made an error,’” explained Tsotsos, who has been developing a theory called selective tuning that explains this feature of visual cognition.

Most neural networks lack this ability to go backward. It’s a hard trait to engineer. One advantage of feed-forward networks is that they’re relatively straightforward to train — process an image through these six layers and get an answer. But if neural networks are to have license to do a double take, they’ll need a sophisticated understanding of when to draw on this new capacity (when to look twice) and when to plow ahead in a feed-forward way. Human brains switch between these different processes seamlessly; neural networks will need a new theoretical framework before they can do the same.
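One simple way to picture a ‘double take’ is a confidence gate: commit when the first pass is confident, otherwise take a focused second look. The sketch below is purely illustrative; `dummy_scores` and the centre crop are hypothetical stand-ins for a real classifier and a real attention mechanism:

```python
def classify_with_double_take(scores_fn, image, threshold=0.7):
    """If the first pass is unconfident, take a 'second look' at a cropped
    region before committing. scores_fn is any classifier that returns a
    {label: probability} dict; the crop stands in for directed attention."""
    scores = scores_fn(image)
    label, confidence = max(scores.items(), key=lambda kv: kv[1])
    if confidence >= threshold:
        return label, confidence
    # Second look: re-run the classifier on the centre region of the image.
    h, w = len(image), len(image[0])
    crop = [row[w // 4: 3 * w // 4] for row in image[h // 4: 3 * h // 4]]
    scores = scores_fn(crop)
    return max(scores.items(), key=lambda kv: kv[1])

def dummy_scores(img):
    # Toy classifier: more confident on smaller, more focused inputs.
    n = len(img) * len(img[0])
    chair = min(0.95, 0.5 + 8 / n)
    return {"chair": chair, "couch": 1 - chair}

image = [[0.0] * 8 for _ in range(8)]
print(classify_with_double_take(dummy_scores, image))  # second look → ('chair', 0.95)
```

The hard engineering problem the article describes is exactly the part this sketch waves away: knowing when the gate should fire and where the second look should be directed.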

Leading researchers in the world are working on it, though, and they’re calling for backup. Earlier this month, Google AI announced a contest to crowdsource image classifiers that can see their way through adversarial attacks. The winning entry will need to unambiguously distinguish between an image of a bird and an image of a bicycle. It would be a modest first step — but also a necessary one.


Contributor: Peter Diamandis

Machines Will Do More Work Than Humans By 2025, Says The WEF

Posted by Cliff Locks On October 17, 2018 at 10:08 am / In: Uncategorized


The World Economic Forum has just released its latest AI job forecast, projecting changes to the job market on a historic scale. While machines currently account for roughly 29 percent of total hours worked in major industries — a fraction of the 71 percent performed by people — the WEF predicts that in just four years, this ratio will begin to equalize (with 42 percent of total hours accounted for by AI-geared robotics). But perhaps the report’s most staggering projection is that machine learning and digital automation will eliminate 75 million jobs by 2025. However, as new industries emerge and technological access allows people to adopt never-before-heard-of professions, the WEF offers a hopeful alternative, predicting the creation of nearly 133 million new roles aided by the very technologies currently displacing many in our workforce.

Why it’s important: Already, more than 57 million workers — nearly 36 percent of the U.S. workforce — freelance. And based on today’s workforce growth rates as assessed by 2017’s Freelancing in America report, the majority of America’s workforce will freelance by 2027. Advancements in connectivity, AI and data proliferation will free traditional professionals to focus on the services humans do best. Doctors supplemented by AI-driven diagnostics may take on more advisory roles, teachers equipped with personalized learning platforms will be freed to serve as mentors, and barriers to entry for entrepreneurs — regardless of socioeconomic background — will dramatically decline.


Contributor: Peter Diamandis