
LLM fine-tuning is like training an intern

Raj’s Casual Conversation with His Accountant

Recently, I met with my accountant, who shared how he’s been using ChatGPT for almost everything these days. He seemed excited and fascinated by the possibilities ChatGPT offers beyond just chat functionality.

Me: “ChatGPT is an application that uses an AI model as a brain.”

Accountant: “Yes, I think I know – the model is called LLM?”

Me: “You are smart 🙂 Well, you could train the model to handle some of your accounting tasks.”

Accountant: “I am intrigued, enlighten me please.”

Me: “Imagine you hired an intern at your company.”

This intern has a foundational understanding of accounting from a four-year college education – they grasp accounting principles, know how to use some accounting software, and understand general concepts. However, to truly excel in your company, they need additional training tailored to your specific systems and requirements.

This, in essence, is what fine-tuning does for generative AI models like those powering applications such as ChatGPT. These models, known as Large Language Models (LLMs), come pre-trained with vast general knowledge, but they lack awareness of your company’s unique tasks and needs.

In the context of AI, fine-tuning is the process of taking a pre-trained LLM (i.e., our intern) and providing additional training, essentially “teaching” it to become more specialized and effective in a specific area or task.
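In code, fine-tuning is simply continued training: you start from pre-trained weights instead of random ones and keep updating them on a small, task-specific dataset. Here is a deliberately tiny sketch in plain Python – a one-weight linear model with made-up numbers, not a real LLM – just to show the mechanics:

```python
# Toy illustration: fine-tuning = continued training from pre-trained weights.
# The "model" here is a single linear weight, not a real LLM.

def train(weight, examples, lr=0.1, epochs=50):
    """Run gradient descent on (x, y) pairs, starting from `weight`."""
    for _ in range(epochs):
        for x, y in examples:
            pred = weight * x
            grad = 2 * (pred - y) * x  # derivative of squared error w.r.t. weight
            weight -= lr * grad
    return weight

# "Pre-training": learn a general rule (here, y = 2x) from broad data,
# starting from scratch (weight = 0).
pretrained = train(0.0, [(1, 2), (2, 4), (3, 6)])

# "Fine-tuning": a handful of company-specific examples (here, y = 2.5x)
# nudge the already-trained weight toward the new task.
finetuned = train(pretrained, [(1, 2.5), (2, 5.0)], epochs=20)

print(round(pretrained, 2), round(finetuned, 2))  # → 2.0 2.5
```

The key point the sketch captures: fine-tuning does not start over. It inherits the pre-trained weights (the intern’s college education) and needs only a little data to specialize them.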

Me: “What do you expect the intern to do?”

Accountant: “It all depends on where the company needs help.”

Me: “So the scope of training will depend on the work that you expect the intern to do?”

The scope of training depends on how you plan to use the intern in your company.

You may train the Intern on a Broad Range of Tasks

This type of training involves familiarizing the intern with the company’s overall accounting processes, terminology across different departments, applicable tax laws, and standard operating procedures. In the context of AI models, this is known as domain adaptation fine-tuning. Returning to the intern analogy, after this kind of training, the intern will “speak” your company’s language but may still lack proficiency in specific tasks.
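For an AI model, domain adaptation typically means continued training on raw, unlabeled text from your domain – policies, ledgers, internal wikis – using the same next-token objective as pre-training. A minimal sketch of what that data preparation looks like, in plain Python with invented document snippets (real pipelines tokenize rather than split on words):

```python
# Domain-adaptation training data is raw, unlabeled text from your company.
# The model keeps its pre-training objective (predict the next token),
# but now on *your* documents, so it absorbs your terminology.
domain_corpus = [
    "Under our revenue-recognition policy, subscription fees are recognized ratably.",
    "Expense reports above the per-diem threshold require controller approval.",
    "Quarterly close: reconcile the AP sub-ledger to the general ledger by day 3.",
]

def chunk(text, size=8):
    """Split text into fixed-size word chunks, a stand-in for tokenized sequences."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

training_sequences = [seq for doc in domain_corpus for seq in chunk(doc)]
print(len(training_sequences))
```

Notice there are no labels and no tasks here – just text. That is why, after this kind of training, the model (like the intern) speaks your company’s language without yet being proficient at any particular job.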

Or You May Train the Intern on a Specific Task (or Set of Tasks)

Alternatively, you can train the intern on a narrower set of tasks, focusing on making them proficient in these particular areas. In AI terms, this is known as instruction fine-tuning. Here, the training is highly targeted, honing the intern’s skills in precisely the tasks you need done.
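Instruction fine-tuning data, by contrast, pairs a task with its desired answer. Here is a sketch of what such a dataset might look like – the `instruction`/`input`/`output` field names follow a common convention rather than any specific framework, and the accounting examples are invented:

```python
import json

# Each record pairs an instruction (and optional input) with the desired output.
# Field names follow a common instruction/input/output convention; the examples
# themselves are invented for illustration.
instruction_data = [
    {
        "instruction": "Classify this expense line as travel, meals, or software.",
        "input": "Uber receipt, $23.50, client visit",
        "output": "travel",
    },
    {
        "instruction": "Summarize this invoice in one sentence.",
        "input": "Invoice #1042: 10 hrs consulting @ $150/hr, net 30.",
        "output": "A $1,500 consulting invoice (10 hours at $150/hr), due in 30 days.",
    },
]

# Datasets like this are often stored one JSON object per line (JSONL).
jsonl = "\n".join(json.dumps(record) for record in instruction_data)
print(jsonl.count("\n") + 1)  # → 2 records
```

The contrast with domain-adaptation data is the point: every record is a worked example of a specific task, which is exactly how you would train an intern to do one job well.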

Accountant: “Wow, this is my lightbulb moment – and it looks like my job is at risk.”

Me: “I can’t say that for sure, but let me just summarize what I explained.”

The goal of training or fine-tuning is to build upon an AI model’s general knowledge and tailor it to perform optimally within a specific context, whether that’s a broad domain or a very specific task. It’s about taking a generally knowledgeable AI model and making it truly excel where you need it most.

Accountant: “Now I am concerned – do you think AI will take over the job of accountants?”

Me: “We’ll discuss it over a beer someday. I need to head home and work on my career de-risking strategy as I think my job is at risk.”

Jokes aside, I hope this helped you learn something new.


PS:

I was inspired by this conversation to the point that I made a YouTube video using this content as the basis. Here is the link if you are interested.

Full disclosure: I am the author of the course “Generative AI application design & development”.

#AIinAccounting #GenerativeAI #ChatGPT #LLM #AITraining #FutureOfWork #TechInAccounting #AIandJobs #FineTuning #AIIntern #DigitalTransformation #GenerativeAIDevelopment #LearnWithAI #AIforBusiness
