
Large Language Models (LLMs) and Trust - My Takeaway from NeurIPS 2023


Wordcloud Generated from Abstracts of all the 3586 Accepted Papers at NeurIPS 2023
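For readers curious how a figure like this can be produced, below is a minimal Python sketch using the open-source wordcloud library. It assumes the abstracts have already been collected into a single plain-text file (the filename is hypothetical); the figure above may well have been generated with different tooling and settings.

# Minimal sketch: build a wordcloud from a file of conference abstracts.
# "neurips2023_abstracts.txt" is a hypothetical input file of concatenated abstracts.
from wordcloud import WordCloud, STOPWORDS
import matplotlib.pyplot as plt

with open("neurips2023_abstracts.txt", encoding="utf-8") as f:
    text = f.read()

wc = WordCloud(
    width=1200,
    height=600,
    background_color="white",
    stopwords=STOPWORDS,   # drop common English filler words
    collocations=True,     # keep frequent bigrams such as "language model"
).generate(text)

plt.figure(figsize=(12, 6))
plt.imshow(wc, interpolation="bilinear")
plt.axis("off")
plt.savefig("neurips2023_wordcloud.png", dpi=200, bbox_inches="tight")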

Yes, we are in the loop and it will not go away soon. The world has just found out that LLMs are precious stones, and it is picking them up, cleaning them, refining them and shaping them into medals, jewellery and other artefacts - talk of your ChatGPTs, LLaMA, Gemini, and a host of other applications in multimodal FMs, climate FMs, and FMs for drug discovery (healthcare and life sciences). In the background, the community that explored the earth and discovered their existence, that first mined and surfaced them, is looking for where the next mining point lies, what the new ways to mine them are, and how quickly they can be bridged from crude mining state to ultimate use, as indicated by the presence LLMs had at arguably the largest Computer Science conference (NeurIPS 2023).

For starters, three (3) of the 4 outstanding main track paper awards went to work based on LLMs (namely: Are Emerging Abilities of Large Language Models a Mirage?, Scaling Data-Constrained Language Models, and Direct Preference Optimization: Your Language Model is Secretly a Reward Model). That is 75% of the papers chosen by elite reviewers in as demanding a field as physics, biology, mathematics or whatever subject you view as rocket science in your mind, and 50% of all the awards to outstanding papers. During the conference, I lightly joked with a colleague of mine, saying: “Human beings at best are very basic; all they need is good food, some theatrics to make them laugh and jump around, and a bit of political lies. Very few truly attempt to go deeper and unearth the underlying mechanics of how things work, and that could be about 1% of humanity.” He responded: “… and that one percent is present in this conference.” It cannot be that all of the 1% were present at the premier conference (the percentage is not empirical, more a choice to indicate how small the number is thought to be), but the over 10,000 participants fit well within it. Therefore, if I say a team selected some 4 papers to be recognised as the most outstanding, I am talking about the 1% of the 1%.

The other 50% of the outstanding papers included a paper on privacy in the main track (Privacy Auditing with One (1) Training Run), and the 2 in other categories fell on trust and physics-inspired ML. The next set of awards went to what the organisers view as an integral part of the advancement of AI - the outstanding papers in the Datasets and Benchmarks category, which saw a two-way split between trust (DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models) and physics-inspired ML for climate research (ClimSim: A large multi-scale dataset for hybrid physics-ML climate emulation). The climate dataset introduced in this recognition comprises multi-scale climate simulations developed by a consortium of climate scientists and ML researchers; it consists of 5.7 billion pairs of multivariate input and output vectors that isolate the influence of locally-nested, high-resolution, high-fidelity physics on a host climate simulator’s macro-scale physical state. Notice that there is already much work on using physics-inspired modelling in foundation models. The most fascinating of all these awards is the so-called Test of Time award, a recognition the conference gives to a paper published at its iteration 10 years prior.
This year’s Test of Time award went to “Distributed Representations of Words and Phrases and their Compositionality”, the paper that conceptualised the distributional semantics of text as vector space representations of words in a model named Word2Vec (remember it?). Broadly, it pioneered capturing the distributional semantics of words and tokens in computational vector spaces, setting a trajectory that has matured into what is now known as LLMs. Well, what NeurIPS is telling you is that this year it saw fit to remind us where LLMs emerged from. So out of the 6 awards, 4 are based on LLMs and 2 on trust. This is a clear indicator of what the community thinks will dominate the next decade, or half thereof. It’s no mean feat to sift through the 3,586 accepted papers and score them to select the 6 outstanding ones.
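To make the idea concrete, here is a minimal sketch of training word vectors in the spirit of Word2Vec using the gensim library, not the original implementation from the awarded paper; the tiny toy corpus and parameter values below are purely illustrative.

# Minimal sketch: learn vector representations of words from co-occurrence context.
from gensim.models import Word2Vec

corpus = [
    ["language", "models", "learn", "word", "representations"],
    ["words", "with", "similar", "contexts", "get", "similar", "vectors"],
    ["neurips", "papers", "discuss", "language", "models"],
]

# Skip-gram (sg=1) with negative sampling, the setting popularised by the paper.
model = Word2Vec(
    sentences=corpus,
    vector_size=50,   # dimensionality of the learned vectors
    window=2,         # context window around each word
    min_count=1,
    sg=1,
    negative=5,
    epochs=50,
)

vec = model.wv["language"]                          # 50-dimensional vector for "language"
print(model.wv.most_similar("language", topn=3))    # nearest words in the vector space

Words that appear in similar contexts end up close together in this vector space, which is the distributional-semantics intuition the paper operationalised and that later architectures scaled up into today’s LLMs.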


In the next post, I will begin a serialisation of the LLM-based papers at the conference. Be on the lookout.


