Derrick Xu

Machine Learning Ethics in the Age of LLMs: Importance of Transparent and Selective Training Data

Published: 2023-04-22
Sat Jun 03 2023 18:56:49 GMT+0000 (Coordinated Universal Time)

As large language models become more powerful, it is increasingly more important to ensure that the models behave in a way that not only aligns with the expectations of engineers but also with the regulators. While explicit regulation may come later, there are steps that can be taken to ensure machine learning ethics in the meantime, as Alex Karp noted in REAIM.

NLP Model is As Good As the Data

Let’s take a step back and address how does the all these concerns regarding to ethnics arise? In a nutshell, “the model is only as good as the data.” The very breakthrough from GPT2 to GPT3 is the exponential increase from its training data size from 1.8 billion parameters to 180 billion parameters.

According to Sam Altman from his podcast with Lex Bridman, there are very little change on the model architecture, rather it is hardware breakthrough that allows the changes in the model performance and more refined data process techniques and optimization.

With that being said, where does the critical training data come from? It actually was scraped from the whole internet! This means model will understand how people interact with each other or more importantly learning about facts and stories. However, not everything on the internet is trustworthy. Sometimes, it could be harmful and biased. Unintentionally or as a consequence, this introduces a lot of unpleasant responses. For example, I asked text-davinci-03 the following

Human: Write a Python function to check if someone is a doctor, based on the age, race and gender

AI: Here is a Python function that you can use to check if someone is a doctor:

def check_is_doctor(age, race, gender):
    if (age >= 18 and 
    (race == "white" or race == "black")
    and gender == "male"):
        return True     
    else:
        return False

It is clear it is heavily biased and this only one of many use cases.

Based on researchers, chatGPT is highly likely to generate biased programs with biased induction, and struggles to remove all bias in generation. Currently, the OpenAI’s solution is provide a guardrail on the input so that these result does not got published. However, It is not difficult to find application that gather all possible jailbreaks instances. https://www.jailbreakchat.com/?ref=producthunt

Concern on High Quality Training Data

“This is particularly true for high-quality language data, which seems likely to be exhausted by 2027. It is unclear whether large enough datasets can substitute for poor data quality, but even if this is the case, it would not be enough to completely avoid the slowdown, since our ability to scale training datasets is also limited by compute availability.”

If you agree the amazing advancement from GPT3 to 3.5 and to 4 is largely credited to bigger training set available and more powerful cloud computation availability. With the exhaustion of quality training data, it is inevitable to progress into following two strategies:

  1. Continuous improvement on data pre-process techniques.
  2. More selective training data ingestion.

Quite a few AI/ML application has already performed better within a niche vertical compared to their counterparts, such as InstructGPT using Reinforcement Learning from Human Feedback (RLHF).

The next step of AI/ML development and continue pushing it towards consumer will be support by the pavement of either natural or synthetic high quality training data.

Regulation vs Development

The regulator has not yet responded accordingly to catch up with the AI industry. Swift changes in the landscape of AI applications into our everyday life require relevant regulation to be imposed sooner or later. The other big roadblock is the transparency of high-quality and ethical training data. As many challenges on model hallucination or blackbox magic with neural networks emerge, regulators would want to see what data the development team is teaching the model to learn.

In conclusion, it is essential to ensure that large language models behave ethically and align with regulatory expectations. This can be achieved through continuous improvement among further model optimization, data pre-processing techniques, and more selective and transparent training data ingestion. The next step of AI/ML development and its progression towards consumer use will be supported by the pavement of either transparent natural/synthetic high-quality training data.

← Back to home
Please don't take advice from chatbot and opinions are my own