By: Mark Cecchini
I received my PhD in Decision and Information Sciences from the University of Florida in 2005. Yes, I’ve got some age on me. No, I’m not great at a leg tuck anymore. OK, fine, I probably never was. When describing my PhD journey and my deep dive into Artificial Intelligence, I often show people the very thick book titled “Statistical Learning Theory” by Vladimir Vapnik (see Figure 1 below) and whine about how it ruined 2 years of my life. While that’s a bit dramatic, it’s not too far off. Prior to studying for my PhD, I was a CPA who went back to school to get my MBA. During my time in school, I found myself very excited about learning everything about business. It’s a case of timing, really. As an undergraduate student I was middling at best but managed to get an Accounting and Finance degree and went on to get my CPA license. There are various schools of thought about why I wasn’t a good undergraduate student, most of which are totally uninteresting, except at the dinner table with family over the holidays. My favorite one is when I told my mother that I passed all 4 parts of the CPA exam on the first go, and she asked me if I had read the results upside down (true story). The moral: I wasn’t ready until I was ready. And I guess when I had some experience and went back for my MBA, I was ready. As soon as I graduated, I was ready for more. Next stop, Gainesville! I came in overconfident due to my success in the MBA program. I assumed I could learn anything! Then, I met linear algebra, Calc 3, object-oriented programming, microeconomics, and finally, Vladimir Vapnik’s stupid book.
Figure 1: Statistical Learning Theory by Vladimir Vapnik (hardcover: 768 pages; item weight: 2.67 lbs.)
Figure 1 illustrates the cruel nature of my dissertation committee.
All of this origin story is preamble to the point I’m about to make. I hated ChatGPT when it first came out. Everyone got all spun up about it, and suddenly people who couldn’t even calculate a first derivative were becoming AI experts, simply because they could write a prompt (essentially a Google search phrase) better than the next guy. Didn’t they know I sweated over this knowledge? I wasn’t built for inverting matrices! What gave them the right to stake such claims? I guess I wasn’t ready.
Then my son had a science project, and we decided to do a comparative exercise using several LLMs (large language models), including ChatGPT, Gemini, and Copilot. Once I saw them in action, I finally changed my tune. I then read a book called Co-Intelligence: Living and Working with AI, by Ethan Mollick (Figure 2). I highly recommend this book if you are part of the audience I was bashing before. That is, you have an interest in AI through the LLMs but don’t consider yourself a deep technologist. This book will speak to you about what they can do today and their promise for tomorrow.
Figure 2: Co-Intelligence: Living and Working with AI by Ethan Mollick
What does this have to do with the last mile problem? Historically, the last mile problem refers to the idea that even if a city gets access to high-bandwidth cables, the residents may not reap the rewards of faster internet if the cables that run from the street to their houses carry only a small bandwidth. The only way to really get that big bandwidth is to dig new lines all the way to each house. But those lines are expensive to dig (and there are a lot of them).
I am going to apply this concept to AI. Think of AI as the large bandwidth cable with infinite potential and think of the cables that go to your house as the capabilities you would need to run an AI project. Very few people were foolhardy enough to take a deep dive like I did. I KNOW how the sausage is made, and it isn’t pretty. At a minimum you need to choose which algorithm to use, optimize the parameters, choose a training/testing methodology, and code it all up in an object-oriented programming language like Python.
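To make that concrete, here is a minimal sketch of that do-it-yourself workflow using scikit-learn and one of its built-in datasets. The algorithm (a support vector machine, fittingly one of Vapnik’s own contributions), the parameter grid, and the 80/20 split are all illustrative choices on my part, not a recipe.

```python
# A minimal sketch of the "do it yourself" AI workflow described above,
# using scikit-learn and a built-in dataset. Algorithm, parameter grid,
# and split are illustrative, not recommendations.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Step 1: get data and choose a training/testing methodology (a holdout split here).
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Step 2: choose an algorithm and optimize its parameters with cross-validated grid search.
pipeline = make_pipeline(StandardScaler(), SVC())
param_grid = {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01]}
search = GridSearchCV(pipeline, param_grid, cv=5)
search.fit(X_train, y_train)

# Step 3: evaluate on held-out data.
print("Best parameters:", search.best_params_)
print("Test accuracy:", search.score(X_test, y_test))
```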
But what if the LLMs are not just here to help 9th graders get out of learning grammar? What if these LLMs are the gateway for normal people (not math freaks) to access the world of AI? What if all the obstacles that would keep you from running a successful AI project just vanished because the LLM can manage them for you? That takes an interesting technology that can do some cool things and turns it into a multitool that helps you get REAL work done too. I am now convinced that in the not-too-distant future you will be able to say something like this to your favorite LLM:
Set up an AI model for me with these 10 characteristics [list characteristics] from [insert dataset] and use [insert column name] from the same dataset as the desired binary outcome. Train the data using best practices for this type of data, then test it on [insert dataset]. Put the results in a table and make it look professional.
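For what it’s worth, here is a rough sketch of the kind of code an LLM might hand back for a prompt like that. The file names, feature list, and outcome column below are hypothetical stand-ins for the bracketed parts of the prompt.

```python
# A sketch of code an LLM might generate for the prompt above. The file names
# ("my_data.csv", "holdout.csv"), the feature list, and the outcome column
# ("outcome") are hypothetical placeholders for the bracketed prompt details.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

FEATURES = ["feature_1", "feature_2", "feature_3"]  # the "10 characteristics" would go here
TARGET = "outcome"                                   # the binary outcome column

# Train on one dataset, test on another, per the prompt.
train = pd.read_csv("my_data.csv")
test = pd.read_csv("holdout.csv")

model = LogisticRegression(max_iter=1000)
model.fit(train[FEATURES], train[TARGET])
predictions = model.predict(test[FEATURES])

# "Put the results in a table and make it look professional."
results = pd.DataFrame(
    {
        "Metric": ["Accuracy", "Precision", "Recall", "F1 score"],
        "Score": [
            accuracy_score(test[TARGET], predictions),
            precision_score(test[TARGET], predictions),
            recall_score(test[TARGET], predictions),
            f1_score(test[TARGET], predictions),
        ],
    }
).round(3)
print(results.to_string(index=False))
```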
I was at a talk recently from a bank president who said, “AI isn’t going to take our jobs away, but those who don’t know how to interact with AI will not have jobs in the future”.
Ready or not! This is a call to action for all of you. Here are a few ideas to get started:
- Download an LLM app (ChatGPT, Gemini, or Copilot) and try it out a few times on mundane tasks.
- Read Co-Intelligence: Living and Working with AI by Ethan Mollick
- Read Statistical Learning Theory by Vladimir Vapnik (just kidding)
- Let us help you get started! Reach out to Russ Klauman at Klauman@moore.sc.edu to learn more about our AI course options.