By Pejman Makhfi https://goo.gl/4kUepE
Image Credit: Bas Nastassia / Shutterstock.com
Three critical trends are driving the rapid development of a new generation of financial robo-advice models. In fact, gains in these three areas create the potential for a Moore's Law-style pattern of acceleration in the field of artificial intelligence-powered financial advice.
1. Availability of data
We generate more than 2.5 billion GB of data every day. Innovators and developers can access vast lakes of this data from a seemingly endless range of sources. For example, states and government organizations now open up their data, including robust offerings from the Internal Revenue Service and the U.S. Census Bureau. There is a similar trend among educational institutions, associations, and nonprofit organizations.
As a result, we see rapid development of new data-driven business models. New advisory offerings that empower consumers are the most exciting of these. They give consumers decision-making tools and deliver more targeted and relevant product offerings.
Established companies also work to harness data by making their customers’ information digitally available to them. In particular, financial institutions create open application programming interfaces that underpin new data-driven and frictionless user experiences.
In one notable example, JP Morgan and Intuit announced earlier this year that their companies will make customer data available via the Open Financial Exchange API. Their goal is to make it easier and more secure for consumers to use their data across various financial apps and websites.
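In practice, consuming an open banking API of this kind usually amounts to an authenticated HTTPS request that returns structured account data an advisory app can reason over. The sketch below is a generic illustration only; the endpoint, token handling, and field names are hypothetical and do not reflect the actual Open Financial Exchange specification:

```python
import json
import urllib.request

API_BASE = "https://api.example-bank.com/v1"  # hypothetical endpoint
ACCESS_TOKEN = "user-granted-oauth-token"     # obtained via the bank's consent/OAuth flow

def fetch_accounts(base=API_BASE, token=ACCESS_TOKEN):
    """Fetch the user's account list from a hypothetical open banking API."""
    req = urllib.request.Request(
        f"{base}/accounts",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def total_balance(accounts):
    """Sum balances across accounts, e.g. as an input to an advisor's planning model."""
    return sum(a["balance"] for a in accounts)

# Example of the response shape a robo-advisor might consume:
sample = [
    {"id": "chk-001", "type": "checking", "balance": 2500.00},
    {"id": "sav-001", "type": "savings", "balance": 12000.00},
]
print(total_balance(sample))  # 14500.0
```

The point of such APIs is that the consumer, not the institution, grants access: the token in the request represents the user's consent, which is what makes cross-app, data-driven advice possible without screen scraping.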
There is a significant opportunity to put this data to work in robo-advice models. As a result, the next generation of robo-advisors will expand their capabilities well beyond investment portfolio management.
2. Increased power and storage
Next, rapid gains in processing power and storage at much lower costs have created the conditions for a new generation of robo-advice models to develop.
Announcements of advances are coming quickly, especially over the past two years as cloud leaders such as Amazon and Google unveil new breakthroughs, and hardware companies such as Nvidia and Huawei optimize products to enable more powerful artificial intelligence computing. A few key examples:
Google Tensor Processing Unit. The team at Google announced its TPU chip in May 2016. Since then, the company continues to develop it, sharing performance studies on its ability to run neural networks at scale at an affordable cost. At the time of the original announcement, Google said it found the TPUs to significantly boost performance for machine learning “roughly equivalent to fast-forwarding technology about seven years into the future (three generations of Moore’s Law).”
Nvidia Volta. In May 2017, Nvidia introduced its Tesla V100 accelerator, featuring the 21 billion-transistor Volta GV100 GPU, which it called “the highest performing parallel computing processor in the world today.”
Huawei Kirin. In September 2017, Huawei introduced its powerful Kirin 970 chipset for mobile devices, which comes with a dedicated neural processing unit. The chipset promises 25 times the performance and 50 times the efficiency of quad-core Cortex-A73 CPU clusters, according to the company. Huawei described this as just the first in a series of advances that will enable AI capabilities on mobile devices.
These advancements will provide the power and speed needed to empower robo-advisors with capabilities like advanced data simulations, natural language conversations, and augmented reality, to name a few.
3. Advancements in AI
Maturity in algorithm and modeling techniques is the third essential area for powering robo-advice model acceleration.
Important theoretical progress in machine-based cognitive learning increasingly emerges from university researchers or is open-sourced by larger enterprises. These advances are exciting, especially because they confirm the field’s potential after the AI winter of the 1970s, when expected commercialization of AI disappointed.
In particular, deep learning and boosting models enable significant leaps forward in the application of machine learning. These include design concepts such as Google's Capsule Network, which offers an alternative to traditional neural nets, and replicative and transfer learning, which enable pattern discoveries and levels of accuracy that would be impossible for their human counterparts.
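The core idea behind transfer learning is to reuse representations learned on one task when training for another, so a small dataset can still yield a useful model. The sketch below is a minimal illustration, not any framework's real API: a fixed random projection stands in for pretrained weights, and only a lightweight linear "head" is fit on the new task.

```python
import numpy as np

rng = np.random.default_rng(42)

# A "pretrained" feature extractor: in real transfer learning these weights
# come from a model trained on a large source task; here a fixed random
# projection merely stands in for them.
W_pretrained = rng.standard_normal((16, 8))

def extract_features(X):
    """Frozen feature extractor: its weights are reused, never retrained."""
    return np.tanh(X @ W_pretrained)

# Small target-task dataset -- too small to train a full model from scratch.
X_target = rng.standard_normal((20, 16))
y_target = rng.standard_normal(20)

# Fine-tune only a lightweight linear head on top of the frozen features,
# here via ordinary least squares.
F = extract_features(X_target)
head, *_ = np.linalg.lstsq(F, y_target, rcond=None)

predictions = extract_features(X_target) @ head
```

Freezing the extractor is what makes the approach data-efficient: only the small head's parameters must be estimated from the new task's limited examples.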
Graph-based and ontology-based learning is an important part of this mix. These approaches significantly improve the semantic understanding of data and its translation into actionable insights. Plus, mechanisms like gated recurrent units (GRUs) are becoming part of the structure that helps AI cognitive models retain and reuse previously learned information.
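As a rough illustration of that retention mechanism, a GRU cell uses an update gate and a reset gate to decide how much past state to carry forward at each step. The sketch below (plain NumPy, with toy dimensions and random weights; not a production framework's implementation) shows a single forward step:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, params):
    """One forward step of a GRU cell.

    x:      input vector at the current time step
    h_prev: hidden state carried over from the previous step
    params: dict of weight matrices and biases
    """
    # Update gate: how much of the previous state to carry forward.
    z = sigmoid(params["Wz"] @ x + params["Uz"] @ h_prev + params["bz"])
    # Reset gate: how much of the previous state feeds the candidate.
    r = sigmoid(params["Wr"] @ x + params["Ur"] @ h_prev + params["br"])
    # Candidate state, computed from the input and the gated previous state.
    h_tilde = np.tanh(params["Wh"] @ x + params["Uh"] @ (r * h_prev) + params["bh"])
    # Blend old state and candidate -- this is how the unit "remembers".
    return (1 - z) * h_prev + z * h_tilde

# Toy usage with random weights (hidden size 4, input size 3).
rng = np.random.default_rng(0)
p = {k: rng.standard_normal((4, 3)) for k in ("Wz", "Wr", "Wh")}
p.update({k: rng.standard_normal((4, 4)) for k in ("Uz", "Ur", "Uh")})
p.update({k: np.zeros(4) for k in ("bz", "br", "bh")})
h = gru_step(rng.standard_normal(3), np.zeros(4), p)
```

Because the gates are learned, the model itself decides, input by input, what past information to keep and what to overwrite, which is exactly the retain-and-reuse behavior described above.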
The results of studies using these ideas are impressive. In one example, University of Mannheim researchers showed how ontologies help some machine learning models validate data 50 times faster. And Google's AutoML demonstrated it can generate machine learning models that outperform those designed by the researchers who built it.
Combining and leveraging such algorithmic advancements helps us drive exponential progress in AI.