In Orbit: A KBR Podcast
Let’s Talk About Large Language Models
Large language models (LLMs) are having a moment, and it’s high time we talked about it. We were thrilled to welcome Dr. Tariq Ahmad, natural language processing expert at Frazer-Nash Consultancy, to break down the basics of LLMs, the boons and pitfalls, and some ideas on how to make better use of this game-changing technology than writing fake Bon Jovi lyrics.
IN ORBIT: A KBR PODCAST
Season 4, Episode 10
Let’s Talk About Large Language Models
INTRODUCTION
John Arnold
Hello, I'm John, and this is “In Orbit.”
Welcome to the podcast, people of the world. Whether you're finding us for the first time or are a long-time listener, we're glad you're along for the ride with us today and staying in our orbit. A few episodes back, we had an absolute ball when we were joined by Dr. Ben Ochoa to talk about AI [artificial intelligence]. Well, friends, it's time to finally talk more about machine learning, specifically large language models. So not only is this going to be an awesome episode for that reason, but we're also thrilled to have with us, for the first time, one of the world-class experts from KBR subsidiary Frazer-Nash Consultancy.
If you're not familiar with Frazer-Nash, they are a leading systems engineering and technology company that uses deep domain expertise to help clients develop, enhance, and protect their most critical assets, systems, and processes, with the ultimate aim of making life safer, more secure, more sustainable, and more affordable for everybody. If you want to learn more about the work the amazing people at Frazer-Nash are doing, you can hit pause on this episode and go check out fnc.co.uk.
TRANSITION
But in the meantime, I'm very happy to welcome Dr. Tariq Ahmad to the podcast to talk with us more about large language models. Dr. Ahmad is a senior consultant with Frazer-Nash Consultancy and specializes in natural language processing. Welcome to the podcast, Tariq.
Dr. Tariq Ahmad
Hello. Thank you very much for having me. I'm really excited to be on the podcast.
John Arnold
It's an absolute pleasure. As I just mentioned in the open, we're thrilled to have someone with us from Frazer-Nash, but before we get into large language models and all that, I wonder if you'd just please tell us a little about yourself and maybe how you first got interested in software development and how that's led you to where you are today.
Dr. Tariq Ahmad
Sure. So let me take you on a little journey, if I may, one that spans decades, continents, and quite a few lines of code. I'm a Londoner by birth, more or less. I'm from Watford, a little town just outside London. It means I'm a firm believer that tea should be hot, weather should be complained about, it's football, not soccer, and everything's always better with a dash of humor.
And yeah, growing up I was fascinated by how things worked, taking things apart just to see if I could put them back together again, which usually I could, thankfully. My interest in software kicked off quite early. Back then, in the 1980s, computers were relatively new. There was a thing called the BBC Model B. Nobody's probably ever heard of it now: 32K of RAM, no USB, no CDs, and we used to load things from cassette tapes. That was my first computer, and I was hooked from then onwards. I remember writing my first program. I was over the moon it worked. Something really basic, probably, but it felt like magic, and that's when I knew that this was the path for me.
My career kicked off at KPMG in their health care division. This was back in the day when coders had to wear suits and ties as well, but it was a really good starting point. I got to cut my teeth on some serious projects, and it wasn't long before I became a team leader and started to focus more on their core products.
I then had the opportunity to come to America, would you believe, to work for Sybase. Sybase, in those days, was competing directly with Oracle and Microsoft in the database space. I was based in Danvers, just outside of Boston. Quite an adventure for me, that was, but I got to work with some clever people and really hone my skills. Then the world went to pot and they got rid of everybody, and I moved back to the U.K. to work for iDocs, a company that supplied software to government in the U.K. I spent 17 years there. Yeah, almost a lifetime in the same company, doing the same stuff, working on websites and apps and databases and that sort of thing.
But then, after two decades, I started to feel an itch that it was time for something new. The world was changing, software development was changing, and I sort of wanted to change with it a little bit. So in 2015, I got the opportunity to go back to university to pursue a PhD in artificial intelligence and machine learning, specializing in natural language processing, that's NLP. Balancing that PhD with a part-time job was quite difficult, but looking back, it was also quite rewarding, and the deeper I got into it, the more I realized that I wasn't too bad at it, can I say?
And again, I was fascinated with computers, fascinated by the idea of teaching machines how to understand and behave like humans. It was like magic all over again, basically. Yeah, I finished my PhD and went back to full-time work, but something had changed. The passion I once had for building websites and apps just wasn't there anymore. I wanted to be in a place where I could use my new skills, really. That's when Frazer-Nash came along. They were looking to expand more into NLP sort of work, I was looking to use my newly acquired skills, and it felt like the stars aligned a little bit. Perfect match.
Then, fast-forwarding to today, I'm part of the strategic modeling group based in the northwest of the U.K., in a place called Manchester, working on some really exciting projects. I love what I do now. Every day is an opportunity to explore the potential of NLP and large language models. And having witnessed firsthand the impact of the internet and apps and websites, I'm really passionate about what these models can achieve, so it's almost like coming full circle, but on a bigger playing field.
Yeah, that's a briefish story of how an adopted Londoner with a love for tea and humor ended up in Manchester working on cutting-edge AI, and it's been a journey. I wouldn't change a thing, apart from maybe playing soccer in Boston.
John Arnold
And maybe the weather too, if you could.
Dr. Tariq Ahmad
And maybe the weather, that's right.
John Arnold
I love hearing from all of our experts across both Frazer-Nash and KBR more broadly. Whether they come from tactical, hands-on engineering or from software development, the common thread seems to be that as children there was that love of taking things apart and reassembling them. And I also applaud you for going back and getting the PhD when you felt that itch and knew that something should change. That's fantastic.
Dr. Tariq Ahmad
Yeah, I think some things are just meant to be. I live in Manchester; I went to Manchester University for my bachelor's and my master's, and I still live here now. Even then, I used to go back to help out with what they used to call Meet the Expert sessions. And it was at one of those sessions where I mentioned to one of the organizers, "I'd love to come back to this department to study." And she said, "Well, why don't you do it?" And I thought, "No, no, no. How can I do it?" Firstly, I'm probably too old. Secondly, I've got a full-time job; how do I balance that? And thirdly, it's probably going to cost a load of money, right? But she sort of got me thinking, and I went back to work and spoke to them and they said, "Yeah, fine, go part-time." Went to meet some professors and they said, "Yeah, okay, fine."
I don't think it was that they liked me, or maybe I should say, I don't think it was that they particularly liked my ideas. It was more the fact that most people who work in academia are people who have finished their master's and gone straight into academia, whereas here was somebody who'd spent decades in industry. I think they liked that more than they liked my ideas for PhD study, basically. And then we managed to get some funding as well, so everything sort of fell into place. And yeah, it was a challenge. As I said, I was a mature student amongst a bunch of young, fresh, ready-to-go people, but it was an amazing, amazing, amazing experience. And yeah, anybody who gets the chance to do it, I'd definitely recommend it.
John Arnold
Well, thanks for sharing that with us, and for all our listeners: it's never too late, make it happen if you want to pursue that higher education. So Tariq, we've discussed AI on the podcast and a little about ML, though not at great length, and there's a lot of interest around the topic in the zeitgeist. But today we're talking more specifically about large language models, so what is an LLM and how does it work?
Dr. Tariq Ahmad
Good question. Good question. LLMs, or large language models, think of them as the chatty AI companions of the tech world. You can think of them as the overachievers in the classroom who not only have all the answers but make it sound really easy; a super knowledgeable friend who's read every single book, article, and blog post ever written. You can ask them anything, from the capital of France to why cats choose to sit on your laptop when you're working, right? And they'll give you an answer that's mostly accurate and actually quite impressively articulate.
So that's what an LLM does, right? And if I try to unpack it a little bit more, it's basically a type of AI model that's been trained on a huge, vast corpus of text. And when I say vast, I mean basically the whole of the internet, right? So terabytes of text from books and websites and blogs and all sorts of written stuff. The model learns the patterns, structures, and nuances of the language, similar to how we humans pick up language through exposure and practice.
In terms of how it actually works, I want to keep it a little bit light on detail. They use something called deep learning, which itself is a subset of machine learning, which is a subset of AI. For the deep learning part, we're talking about neural networks, which have layers of artificial neurons. You can think of them as loosely modeled on the brain, and each layer processes information at a different level of abstraction, allowing the model to build a richer understanding of the language.
Where LLMs have excelled is context. Previously we had other types of models which would forget the context of what you were talking about. So if you had a long piece of text and you started talking about going to the bank at the beginning, by the time you got to the end of the paragraph and you said, "When I reached my destination," it had forgotten that your destination was the bank; it lost the context. The secret sauce behind LLMs is a special architecture called a transformer, which excels at processing sequences of data and can focus on different parts of the input at the same time, allowing it to understand context much better than earlier models. So when you mention "bat," for example, the model can work out whether you were talking about the flying animal or the bat you use when you're playing baseball or something, right?
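[Editor's note: To make the "bat" example concrete, here is a minimal Python sketch of how a transformer model gives the same word different representations depending on its context. It uses the open-source Hugging Face transformers library and a small BERT model; neither is mentioned in the episode, and both are illustrative choices only.]

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed_word(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual embedding of the first occurrence of `word`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # one vector per token
    word_id = tokenizer.convert_tokens_to_ids(word)
    position = (inputs["input_ids"][0] == word_id).nonzero()[0].item()
    return hidden[position]

animal = embed_word("The bat flew out of the cave at dusk.", "bat")
sport = embed_word("He swung the bat and hit the ball.", "bat")
print("cosine similarity:", torch.cosine_similarity(animal, sport, dim=0).item())
# The similarity is well below 1.0: the model encodes the two senses of "bat" differently.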
John Arnold
Or cricket.
Dr. Tariq Ahmad
Well done! Well done. Or cricket, absolutely. Exactly that. So that's a little bit of an explanation. The word "large" in large language models comes from the fact that they have billions of parameters. It's like lots of little dials and knobs that you've got to twist to make the model learn properly; those are the parameters. When you hear of Llama-70B, that's 70 billion different parameters to tweak.
John Arnold
Wow.
Dr. Tariq Ahmad
Yeah, exactly. Exactly. In essence, it's like a super smart digital assistant trained on enormous amounts of text, and it generates human language by predicting the next word in a sequence based on what it's learned. It has no true understanding of the language, but because it's seen so many examples of the language, it can start to predict what the next word is, which is why it's really good, unnervingly good at times.
And yeah, I think what makes people like me really interested in and fascinated by LLMs is their versatility. They can help with lots of tasks: customer service type work, generating creative content, short stories, songs in the style of Michael Jackson, you name it. Sometimes they give results which are slightly off the mark, but more often than not they're useful.
Yeah, a little bit of a summary: they're powerful tools that are reshaping the landscape of AI, really. They're really impressive, they're really entertaining, can I say, and occasionally they'll come up with surprising results. They're definitely changing the way that we engage with technology, and that's the really exciting part of the journey for me, really.
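[Editor's note: As a hands-on illustration of the two points above, the billions of tunable parameters and the "predict the next word" loop, here is a short Python sketch using the Hugging Face transformers library and distilgpt2, a very small open model chosen purely as a stand-in for the much larger models discussed.]

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# "Parameters" are simply the learned weights; count them.
n_params = sum(p.numel() for p in model.parameters())
print(f"distilgpt2 has about {n_params / 1e6:.0f} million parameters; Llama-70B has roughly 70,000 million.")

# "Predicting the next word": score the whole vocabulary and take the most likely next token.
prompt = "I walked down to the bank to deposit my"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits            # shape: (1, sequence_length, vocab_size)
next_token_id = int(logits[0, -1].argmax())    # highest-scoring next token
print(prompt + tokenizer.decode([next_token_id]))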
John Arnold
It's funny that you mention that the models can be humorous. I have a group chat with some copywriter friends from bygone agency days, and I came across a particularly dull and banal piece of marketing content yesterday and sent a text to my friends. And my friend sent back a screenshot showing he had plugged what I'd sent into ChatGPT and asked it the question I had posited in the thread. And ChatGPT sent back a pretty snarky answer to my question. You could tell that there was some sarcasm there.
And then my other friend said, "Now put the answer in the style of Bon Jovi lyrics." And it did! It put it into a song-lyric structure with two verses, a chorus, and a bridge, and it was really, really spot on. Scarily so.
Dr. Tariq Ahmad
Yeah, it's not surprising that it can do that sort of stuff, because, as I said, it's seen lots and lots of examples of these things. But the question that a lot of us are grappling with is how we can make this tech really useful. And it's not just us at Frazer-Nash; I think the wider industry out there is grappling with it as well. So we've got this thing that can do loads of things, and a lot of its potential hasn't even been explored fully yet, but how can we actually make it useful, to make our daily lives and our jobs easier? And again, like I said, that's the interesting part.
John Arnold
Well, that segues perfectly into the next question, because we have heard on the news, and right here on this podcast, about things like ChatGPT and some of the less savory uses, like plagiarism or having it spout out faux Bon Jovi lyrics. So what are some of the beneficial applications and use cases for LLMs? Where does the real value lie?
Dr. Tariq Ahmad
Yeah, good question. I think I would have to start with content generation. That's the obvious one, right? Whether you need articles or email drafts or even songs and poetry, LLMs can generate starting positions, or starting drafts, can I say, in seconds. They can help brainstorm ideas, create drafts, or even produce full pieces of content. I think that's the obvious one. And then there are some industries I think it's going to make waves in. The obvious one in my head is education. It's no surprise that when, I think it was Google, were demonstrating Gemini (I might have that wrong), they demonstrated its potential in the classroom, acting as a real-time tutor.
John Arnold
Interesting.
Dr. Tariq Ahmad
Providing explanations, then answering questions in real time. I also talk at my secondary school as part of their yearly get-together, and teachers there have embraced it as well, in terms of getting students to use it to help themselves. For example, algebra: getting students to use ChatGPT to explain algebra in a really simple, basic way. I see education as a really big place where this can definitely have an impact. It's like having a buddy who knows the answers to everything, right?
John Arnold
Right.
Dr. Tariq Ahmad
Translation services. We at Frazer-Nash used LLMs to translate a bunch of documents from Portuguese to English a couple of months back. We then had them assessed by some Portuguese speakers, and we were told, "Hang on a minute, this has done a really good job."
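[Editor's note: For readers curious what a translation job like this looks like in code, here is a minimal Python sketch using the Hugging Face transformers library. The OPUS-MT checkpoint named below is an assumed, illustrative choice for Portuguese-to-English; it is not necessarily what was used on the project described.]

from transformers import pipeline

# Assumption: an open Romance-languages-to-English model; any suitable pt-to-en checkpoint works the same way.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ROMANCE-en")

document = "O relatório descreve os resultados dos testes estruturais realizados em 2023."
print(translator(document)[0]["translation_text"])
# The machine output is then reviewed by human Portuguese speakers, as described above.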
John Arnold
Excellent.
Dr. Tariq Ahmad
And then ourselves, within Frazer-Nash, we are also using LLMs to help with research analysis and literature reviews. LLMs can clearly sift through huge amounts of data and extract relevant insights. But the interesting bit is that they can also summarize it for you. So if you've got a thesis, well, who's got time to read a full thesis, for example? You can get an LLM to give you a summarized version in a digestible format. And it can help scientists, analysts, and businesses make sense of their data faster, and hence allow for better decision making.
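[Editor's note: A summarization step like the one just described can be sketched in a few lines of Python with the Hugging Face transformers library; the BART model below is a commonly used open summarizer, chosen purely for illustration.]

from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

# In practice this would be a thesis chapter or a batch of papers; a short placeholder stands in here.
long_text = (
    "Large language models are trained on vast text corpora and can be adapted to downstream tasks "
    "such as classification, translation, and question answering. This thesis examines their behaviour "
    "in low-resource domains, evaluates fine-tuning strategies, and discusses limitations including "
    "hallucination, bias, and computational cost."
)
summary = summarizer(long_text, max_length=40, min_length=10, do_sample=False)[0]["summary_text"]
print(summary)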
Yeah. And that's just me with some ideas off the top of my head. Again, I have to stress that there are probably so many places where we haven't even explored the possibilities of what LLMs can do just yet. But also, I think it's not just about what LLMs can do; it's also about the time and the resources they save. We can use LLMs to automate repetitive tasks and allow people to focus on more interesting, creative, valuable ideas. Instead of spending hours doing mundane stuff, you can leverage LLMs to enhance your productivity, boost creativity, drive innovation, and do more interesting stuff.
Like any tool, LLMs have their quirks. As I said, they produce results that are off the mark sometimes. But they're a treasure trove, a sort of Swiss Army knife, that will transform the way we work, learn, and create. So whatever industry or field one is in, LLMs are here to help and they're here to stay.
John Arnold
You alluded to this a moment ago: what are the limitations of LLMs? What challenges are you and the other experts you work with, or even the layperson, experiencing with LLMs, those quirks as you called them?
Dr. Tariq Ahmad
Yeah. There are a couple of obvious ones. I have to start with hallucinations. In the LLM context, hallucinations are where the model generates information that is fabricated or nonsensical. As I said, LLMs are trained on, you could say, the whole of the internet, but they don't actually know where the boundaries of their knowledge are. Hence they sometimes generate things which are, as I said, basically wrong. They might sound plausible, but you can imagine that these sorts of hallucinations are problematic in scenarios like the medical domain or the legal domain, where accurate information is vital. So if you've got a hallucination there, it's going to be really problematic. That's the obvious one, I think: hallucinations.
I think also, and I might have alluded to this before, there's their lack of true understanding. Yes, they can predict the next word in a sentence really well, but they don't actually comprehend the meaning behind those words particularly well. That's why you sometimes get answers that leave you scratching your head.
Bias. LLMs are trained on data sourced from the net, so they can inadvertently pick up and replicate the biases that are in that data. Consequently, the output from the LLM will also reflect that bias, and that could lead to problematic or just plain unfair results. And I don't know whether the big players, the people who make these models, are that open about the data they're using and what they're doing to it. So you have bias.
And then I'd probably also say computational resources. You need serious resources to run these things. You've heard the story in the news recently about OpenAI running out of funding because it takes them a lot of money to keep their servers running. But there's also training these models. Yes, theoretically you or I could create our own LLMs from scratch, but in practice you need a lot of resources, power, money, et cetera. High energy consumption, too. It's all about net zero and that sort of stuff these days; it's essentially an environmental issue. And I've read somewhere that an LLM's carbon footprint can rival that of a small country.
John Arnold
Wow.
Dr. Tariq Ahmad
Yeah, they're really useful tools, but we need to be a little bit mindful of the amount of resources they consume. And then I guess the final one I'd throw in would be the challenge of integrating LLMs into projects, products, pipelines, and existing workflows. It can be tricky sometimes to make sure that the LLM works in tune, in harmony, with other systems and processes. It can be complicated. But they definitely bring value. As a conclusion, I'd say they are really amazing tools that can revolutionize how we work, create, and problem-solve, but they do have their limitations.
John Arnold
Absolutely. It's interesting, I think, that one of the take-home statements you made is that they don't know where the boundaries of their knowledge are. That's the same for human beings. And then there's that bias being shared online; you get in trouble when you're down a Facebook wormhole. You mentioned translation services as one use case for you and the folks at Frazer-Nash Consultancy. How are you and your colleagues using LLMs aside from translation services?
Dr. Tariq Ahmad
We're taking a broad approach, really. We're trying to explore the full spectrum of what LLMs can offer. Part of that, I have to say, involves making sure that we are continuously learning and upskilling in this evolving field. Obviously, as LLMs evolve, not a month goes by without some new LLM coming out and claiming it's the best and all the rest of it. So we need to make sure we're on top of those things, and on top of our skills as well, not just in terms of how to create the most effective prompt, but also how we deploy and integrate these models. So we're constantly trying to expand our expertise in both the theory and the practical aspects: trying to understand the models themselves, but also mastering the infrastructure. A lot of the work that we at Frazer-Nash do is highly secure, so we can't just automatically say, "Okay, we're going to use a cloud-based service or an API like ChatGPT's API." We have to think about on-prem deployments as well, and then think about scaling them so they effectively meet the requirements of the clients.
In terms of specifics, one of the projects we've worked on involves using LLMs to help develop horizon-scanning platforms. We use these platforms to sift through vast amounts of information, trying to identify trends and insights that might impact a client's business. This helps them stay ahead a little bit, but it also provides them with intelligence to inform their strategies. As part of that project, we used LLMs to figure out, from a vast amount of information, which bits might be relevant to look at. And that was a really interesting part of the project, because at that time LLMs had only just come to public attention. So we evaluated LLMs against existing methods and found that the LLMs basically outperformed all the existing state-of-the-art methods. We weren't particularly surprised by that.
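[Editor's note: As a rough sketch of the "find the relevant bits" step in a horizon-scanning pipeline, an off-the-shelf zero-shot classifier can score each incoming item against the themes a client cares about. This is an illustration only, with invented topics; it is not the Frazer-Nash platform or the methods it was evaluated against.]

from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

item = "Regulator announces a new offshore wind licensing round for 2026."
topics = ["energy policy", "defence procurement", "consumer electronics", "sport"]

result = classifier(item, candidate_labels=topics)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
# Items that score highly against a client's chosen themes get surfaced for a human analyst to review.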
John Arnold
Right. Right.
Dr. Tariq Ahmad
We've also used LLMs to help clients with their coding needs, would you believe? So, how best to make use of LLMs to generate code, help with debugging, or provide suggestions. There are lots of models out there that are invaluable in the development process. It's like having a coding assistant that can help you brainstorm.
John Arnold
Wow.
Dr. Tariq Ahmad
Yeah. So as we work on these projects, we're also engaged in lots of internal initiatives, can I say. I have to admit we're a little bit behind the curve compared to some other organizations, but this gives us a unique opportunity to learn from them and their challenges, and we're catching up. We're doing a lot of outreach within the business, actively informing people about what LLMs are and what they can do, encouraging them to think about where these models could be used and applied, and asking various stakeholders to share their insights and success stories and highlight the impact of LLMs across their industries. We try to inspire people to be creative and innovative and to have a view and an understanding of how this tech could be used for their own needs.
We also have some internal tools that we've built. It's really early days on some of these tools, but we have a tool which at the moment helps us to find people with specific skill sets. And then we want to be able to use an LLM to say, "Okay, I've found this person's CV. Can you give me a summary of the CV, 500 words for example, tailored to a specific aspect of the CV?"
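[Editor's note: To give a feel for what such a tool does, here is a hypothetical Python sketch of the prompt it might assemble. The function name, wording, and 500-word limit are illustrative; this is not the actual internal tool.]

def build_cv_summary_prompt(cv_text: str, focus_area: str, word_limit: int = 500) -> str:
    """Assemble a prompt asking an LLM for a CV summary tailored to one focus area."""
    return (
        f"Summarise the following CV in no more than {word_limit} words, "
        f"focusing specifically on the candidate's experience relevant to {focus_area}. "
        f"Write in plain, bid-ready prose.\n\n"
        f"CV:\n{cv_text}"
    )

# The assembled prompt is then sent to whichever LLM the tool is configured to use.
print(build_cv_summary_prompt("...candidate CV text here...", "naval systems engineering"))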
John Arnold
Interesting.
Dr. Tariq Ahmad
And then we can use that in the bid, and that will save us time, energy, effort, money, et cetera. So yeah, our approach is both broad and deep. We're investing in research to push the technology forward, while at the same time learning how to deploy the models effectively and responsibly, and keeping an eye on what our colleagues at KBR are doing with KBRAIN, trying to leverage that and make use of it; that might give us some shortcuts as well. And we're developing internal tools. Okay, LLMs have been around for a couple of years now, but it's still early days for a lot of our clients. We're trying to understand what LLMs can do for them, and at the moment, it's all about education.
John Arnold
Right, yeah. I think a common thing you see now is lots of companies rushing to say, "We do everything with AI and ML," and clients saying, "I want AI and ML," without a lot of deep understanding of what that necessarily means or how it can even help, so it's interesting to hear about these practical uses for it. And for listeners who aren't familiar with it, KBRAIN is the KBR AI Network, an internal LLM that we're developing that will also have use cases with our client base. You've discussed this a little bit already, but I want to dive into it a little more deeply, and that is the role of the human being with large language models. I know data integrity is a huge aspect of it, making sure that the models are being fed the right data. So what's the role of professionals, of human beings, like you with LLMs, and how do you see that changing over time?
Dr. Tariq Ahmad
Good question. Good question. I feel that the relationship between humans and LLMs is a little bit like that of a conductor and an orchestra. The instruments, the LLMs, can produce beautiful music, but it's the human touch that brings the direction and the harmony and pulls it all together. I think the current role of humans is quite well defined. Training and fine-tuning, for example: humans are responsible for the data used to train LLMs, selecting high-quality, diverse data sets to make sure that the models learn from a wide range of perspectives and to reduce bias. I think that's a really important role for us at the moment.
I think ethics is a big thing; maybe I haven't mentioned it enough. Humans have a crucial role in making sure that we identify and mitigate biases, and also in establishing proper guidelines for responsible use. That's a big thing at the moment. I always say in my NLP and LLM presentations that while LLMs can do a good job at generating text and all that sort of stuff, I think, and this might be controversial, humans will always be needed to interpret that output in a critical way, assessing whether the information is accurate and relevant. And again, this is especially important in domains such as medicine, healthcare, and law, where one wrong word could lead to significant consequences.
John Arnold
Right, yeah.
Dr. Tariq Ahmad
So I feel that that role, for the foreseeable future if not forever, will always be there. And yeah, we are the ones who are interacting with LLMs, guiding their usage, determining their applications. So our feedback matters. You might have seen at the bottom of ChatGPT, when you get a response back from it, the thumbs up and thumbs down. That's our feedback, and they're going to use that to make those models better. And again, I feel as if that is a crucial part of what we're doing at the moment.
John Arnold
Interesting. Yeah.
Dr. Tariq Ahmad
And then you asked about how that might evolve over time. It's interesting. I think we're likely to see more collaboration between humans and LLMs, with LLMs becoming more clever, can I say, at understanding what our intent might be. We can fashion prompts, et cetera, et cetera, but I think there's a little piece missing there: if they know what my preferences are, what my style is, maybe they can potentially give better answers. I also think the demand for AI education and literacy is going to increase, so people will need to understand not only how to use LLMs but also how they work, and understand the limitations and the pitfalls better.
I think also, and I have to go back to creativity, as LLMs improve, they'll become better and more powerful tools for creativity. Instead of replacing human creativity, they might work with us and enhance it, so artists and writers and musicians might collaborate with LLMs to push the boundaries of what's possible. And [inaudible 00:31:51] go back again to feedback loops: as LLMs generate outputs, humans will continue to provide the feedback that informs their further development, an iterative process which enhances the models' performance over time. So yeah, those are some of the things I feel we humans will play a part in going forward. My strong belief is that the human element is irreplaceable, can I say. We bring context, ethics, critical thinking, and evaluation to the table to make sure that these models serve us in an effective way. And as we move forward, I think our roles will evolve, focusing on collaboration, ethics, and continuous learning.
John Arnold
It's so fascinating to hear you talk about that. That was something Dr. Ben Ochoa mentioned in the AI episode we did a few episodes back as well, that the human will always be needed to interpret the quality, relevancy, and integrity of the data. So that, at least, is comforting to hear. We've talked about the role of the human being; let's look into the future now. We're 10, 20 years down the road, maybe even not so far down the road. What are some of the trends? You've mentioned creativity and some of the other possible breakthroughs that LLMs can help us to make. What's a use case that you would really like to see LLMs take on as the technology becomes more sophisticated?
Dr. Tariq Ahmad
I think a lot of the focus is going to be on general AI models. At the moment, we have models built for specific things. For example, there's a model called Codex, developed by OpenAI, which is used specifically for helping with coding. But there's talk of a general model that knows everything. I can't see it. If quantum computing comes along, that might be a thing; I'm not sure. But certainly there's more and more of a shift now from large language models to specialized language models, from LLMs to SLMs. I mentioned Codex, and there are other models developed to work on specific, task-oriented problems.
I think there's more focus now also on multimodal capabilities. When ChatGPT first came to public attention, it was mostly to do with text. Now you hear it talking; now you see it looking at images, digesting images. And there are other models, like Claude Sonnet I think, that focus on images. So text, images, audio, and maybe other types of sensory input; I think it's going to be more focused on those. I also think about the multilingual aspects. LLMs, I think, will expand their language support to include other languages and dialects, to make them much more accessible and inclusive.
I think edge computing integration might also change the game a bit, making LLMs more accessible by optimizing them for deployment on edge devices, like mobile phones, for example, so you're not relying on cloud-based solutions anymore. That will hopefully drive more privacy and efficiency. You mentioned ethics and bias; I hope we can expect to see more advancements in the techniques to mitigate those things. So LLMs of the future might have more sophisticated algorithms to detect and reduce bias in their outputs, making the content fairer and better.
And I also hope, I don't know, maybe I'm just wishing, but I also hope that we can improve how we train LLMs. At the moment it takes a lot of time, and there are efforts around things like few-shot and zero-shot learning, where you can get a model to learn from very few examples. Hopefully there are improvements to come in that realm. And then finally, maybe, improvements in robustness and security: making them more robust against adversarial attacks and misinformation, and basically protecting them better against malicious use of the tech.
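[Editor's note: Few-shot learning, mentioned above, is at its simplest a prompt that carries a handful of worked examples, so the model infers the task without any retraining. Below is a toy Python illustration; the task and examples are invented.]

# A few-shot prompt: two worked examples followed by the case we want the model to complete.
few_shot_prompt = """Classify each maintenance report as 'urgent' or 'routine'.

Report: "Oil pressure warning light comes on during every start-up." -> urgent
Report: "Cabin reading light flickers occasionally." -> routine
Report: "Hydraulic fluid found pooling under the landing gear." -> """

# Sent to any capable instruction-following LLM, the expected completion is "urgent",
# inferred purely from the two examples embedded in the prompt.
print(few_shot_prompt)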
John Arnold
Fascinating. That is so fascinating. Well, Dr. Ahmad, is there anything you'd like to add before I let you go today?
Dr. Tariq Ahmad
Well, first of all, thank you very much for having me. It's been a blast talking about NLP and large language models. I guess if there's one thing I'd add, it's that we are really just scratching the surface of what's possible with this technology. LLMs have changed many aspects of our lives already, but I think there's much, much more to come. It's crucial that we continue to approach this tech with care and, as much as we can, make sure we prioritize ethics, transparency, and fairness. And on a lighter note, I fully support AI taking on the heavy lifting for certain tasks.
But there's no way it's going to replace the simple joy of having a friendly chat with somebody, right? So it's good tech, but it shouldn't just take over our lives and stop us thinking for ourselves. One of the dangers I maybe should have mentioned also is that I personally feel LLMs are making us all a little less clever, can I say. When I was at school, if I had to write an essay on Roman history, I had to go and research it and do it, whereas now you just press a button and you've got the answer. So you're missing that little bit of getting into the books and doing some research.
So a balance has to be achieved. And yeah, final words, I guess: if this topic has piqued your interest, or if anybody wants to chat more about AI and LLMs, please do feel free to reach out to me. I'd love to talk about NLP and LLMs. And again, thank you for having me on. It's been fantastic. Thank you.
John Arnold
Yeah. If anyone's interested in speaking with Dr. Ahmad about this subject, you can find him via Frazer-Nash Consultancy's very nice website; that is fnc.co.uk. So feel free to reach out to him with those questions. Dr. Ahmad, it's been a pleasure having you on. Thank you so much for your time; I know it's very valuable. I hope you enjoy the rest of your day, that the weather there holds up for you, and that you have a nice weekend.
Dr. Tariq Ahmad
And yeah, the weather's fine at the moment. Thank you very much. We're approaching winter, so this could change anytime now.
John Arnold
That's right.
Dr. Tariq Ahmad
Thank you very much.
John Arnold
Well, thank you again.
CONCLUSION
John Arnold
Well, what a fantastic foray into the world of large language models, featuring Frazer-Nash's own Dr. Tariq Ahmad. Many thanks to Dr. Ahmad for his time and generosity in talking to us about this world-changing technology. I also want to send out thanks to my colleague Rhiannon Ho over at Frazer-Nash for her help in making this episode happen. As always, a big shout-out of appreciation to our producer Emma for the work she does getting these episodes out to you. Again, if you're interested in learning more about Dr. Ahmad's work and the other exciting things going on at Frazer-Nash Consultancy, please visit fnc.co.uk.
Likewise, to learn more about how KBR is doing work that matters to the rest of the world, you can check out kbr.com. And lastly, I want to take a moment to say officially that a few months back, this little old podcast was given top honors in the podcast category at the PRNews Digital Awards. It takes a village, so congratulations to our whole “In Orbit” crew and to KBR Global Marketing and Communications on that recognition. And those congratulations also extend to you, our listeners, because if you weren't listening, we wouldn't be doing this.
So thank you for your 21,000-plus downloads of our episodes. We know there's a lot happening in the world, a lot going on and fighting for your attention. Just know that we appreciate you checking in with us and keeping us in your orbit. Take care.