Getting 'Graphy' with ML: Graph Neural Networks
Keep an eye on the applications of GNNs; things look very promising.
Today we're delving into Graph Neural Networks (GNNs), a transformative concept that promises to reshape how we perceive data relationships and innovation, with our dear friend Ramon Vigdor from Acrobator Ventures.
Ramon's insights offer a new perspective on GNNs, a concept that's more than just nodes on a graph; it's about the connections that breathe life into data points. From recommendation systems to drug discovery, GNNs are revolutionising industries that thrive on interconnectedness.
Today, I'm highlighting an advance in the machine-learning world, and this time one that's not yet on everyone's lips but soon will be; take my word for it. I'll start with the basic theory and applications of Graph Neural Networks. But first things first.
Unlocking the graph: data points go beyond plain vanilla
On the right we can see two examples of very familiar data structures: separate data points with some fixed features. For example, on the bottom right we see six such points representing the Projected and Actual Cash Flow of Monthly Income and Expense.
Are these data points related in any way? They obviously are: presumably they originate from the same company and cover the same time period. A machine-learning algorithm, when presented with many such data points, may be able to learn some of the correlations between the features. In this case, we might be able to make better future projections from the given actuals.
But the relationship between the points is hidden from the algorithm. All the ML model receives is a list of data points, each with values for their features.
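To make this concrete, here is a minimal sketch of what such a "vanilla" dataset looks like to a model. The feature names are illustrative, not from a real dataset:

```python
# A plain tabular dataset: each data point is an independent feature
# vector. The fact that the rows come from the same company and cover
# consecutive months exists only in our heads, not in the data.
monthly_points = [
    {"projected_income": 120.0, "actual_income": 115.0,
     "projected_expense": 80.0, "actual_expense": 86.0},
    {"projected_income": 130.0, "actual_income": 128.0,
     "projected_expense": 82.0, "actual_expense": 81.0},
]

# All a standard ML model receives is this flat matrix of feature values;
# any relationship between the rows is hidden from it.
feature_matrix = [list(p.values()) for p in monthly_points]
print(feature_matrix)
```

Nothing in `feature_matrix` tells the model that row two follows row one in time, or that both describe the same company.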
Now take a look at the bottom left. Here we have three data points: Man A, Woman B, and a car. Obviously these points have different structures and features: the man would probably have an Age and a Health Status, the car a Brand and a Value, and so on. But the essential difference from the previous example is that here, the relationships among the data points are represented explicitly.
Man A is Married to Woman B, and also Lives with her. She Owns the car, while he Drives it. Since these relationships are explicit, ML models can now learn a lot more from the data. And when we expand this to millions of data points with thousands of possible relationships in any direction from each node, we get a lot of new explicit information indeed.
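The same example can be written down as a tiny graph: nodes carry their own features, and the relationships become explicit, typed, directed edges. The feature values below are made up for illustration:

```python
# Nodes with heterogeneous features -- each node type has its own schema.
nodes = {
    "man_A": {"age": 42, "health": "good"},
    "woman_B": {"age": 39, "health": "good"},
    "car": {"brand": "Volvo", "value": 30000},
}

# (source, relation, target) triples. Each triple is a piece of explicit
# information that a flat feature table simply cannot express.
edges = [
    ("man_A", "married_to", "woman_B"),
    ("man_A", "lives_with", "woman_B"),
    ("woman_B", "owns", "car"),
    ("man_A", "drives", "car"),
]

# A model (or a query) can now ask relational questions,
# e.g. who is directly connected to the car:
car_neighbours = sorted(src for src, rel, dst in edges if dst == "car")
print(car_neighbours)  # ['man_A', 'woman_B']
```

At millions of nodes, this edge list is exactly the extra signal a GNN can exploit and a vanilla model cannot.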
But we cannot use just any vanilla neural network models for that. We need Graph Neural Networks.
GNNs: Graph Neural Networks
Thanks to intensive research in recent years, neural network models designed specifically for learning from graph data have been steadily gaining popularity. Their first applications have proven their advantages in fields where explicit relationships make up the critical mass of the data itself: Recommendation Systems (Uber Eats, Pinterest), Traffic Prediction (Google Maps), Social Networks, and Drug Discovery (MIT Lab).
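The core mechanism behind most GNNs is message passing: in each round, every node updates its embedding by aggregating its neighbours' embeddings. Here is a deliberately minimal sketch in pure Python, using unweighted mean aggregation and no learned parameters (a real GNN layer would add trainable weight matrices and a nonlinearity):

```python
def message_passing_step(features, adjacency):
    """One round of neighbour averaging -- the essence of a GNN layer.

    features:  {node: [float, ...]} current node embeddings
    adjacency: {node: [neighbour, ...]} neighbour lists
    """
    updated = {}
    for node, own in features.items():
        neighbours = adjacency.get(node, [])
        # Aggregate the node's own vector together with its neighbours'.
        stacked = [own] + [features[n] for n in neighbours]
        updated[node] = [sum(col) / len(stacked) for col in zip(*stacked)]
    return updated

# Toy graph: a path A - B - C.
feats = {"A": [1.0, 0.0], "B": [0.0, 1.0], "C": [1.0, 1.0]}
adj = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}

feats = message_passing_step(feats, adj)
print(feats["B"])  # B's embedding now blends information from A and C
```

Stacking several such rounds lets information flow across multi-hop relationships, which is what makes GNNs effective for recommendations, traffic, and molecules alike.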
So whatβs in it for us?
From our point of view, it is clear that the potential of GNNs has not yet been applied to a vast list of use cases. Startups solving problems with GNNs are only now popping up.
Take, for example, programming languages. The relationships between all the entities in source code can easily be represented by a graph. And since these relationships hold much more information than the mere text of the source code, it makes sense that better code debuggers, analyzers and generators can be built on top of the graph data using GNNs.
Enjoyed these insights? These are the types of insights we uncover from being an LP, as investors don't usually line up at our doors to share them. By marrying our investing, talking and writing, hopefully these learnings will help you on your journey in venture. Want to explore more?