What did I notice, GPT & all?

People are always searching for something valuable and relatable, something that can be easily understood and applied. This concept came to me in a dream I had at 24, in which I saw a donut shop. But this wasn't just any donut shop; it was different, unique, and it sparked an idea in me.

In today's world, we need tools that are not just useful, but also intuitive and adaptable. For instance, when I first explored Docker on my Windows 10 laptop, I saw its potential as a teaching tool. Similarly, when I delved into OpenAI's ChatGPT, I realized its potential to revolutionize education by mimicking human interaction and generating functional code.

However, like any tool, it's not without its challenges. There are concerns about the system making things up or not delivering the expected results. The question that arises is, "What am I inputting that is causing these unexpected results?" This is a common issue that many of us have faced.

During a DevOps day, when I introduced GPT, the response was overwhelming. People started discussing its potential, questioning how it could be used to enhance our knowledge and test our solutions. The idea was to use GPT with plugins for tools like Kubernetes and Docker: essentially a small Flask app exposing the right REST API layout, with YAML config information describing the endpoints so GPT knows how to call them. The goal was to let GPT interact with these tools, and if it made a mistake, we could learn from it and correct it.
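To make that idea concrete, here is a minimal sketch of what such a plugin backend could look like: a Flask app with two illustrative endpoints, one returning a generated Dockerfile for a requested base image and one checking that a Kubernetes manifest GPT produced is at least valid YAML before anyone applies it. The route names, payload fields, and port are my assumptions, not a finished plugin.

```python
# Hypothetical sketch of a GPT plugin backend: a Flask app exposing a small
# REST layout that a plugin manifest / API spec could point at.
# Endpoint names and payload fields are illustrative assumptions.
from flask import Flask, jsonify, request
import yaml  # PyYAML, used to validate manifests GPT produces

app = Flask(__name__)


@app.route("/dockerfile", methods=["POST"])
def dockerfile():
    """Return a simple Dockerfile template for the requested base image."""
    body = request.get_json(force=True)
    base = body.get("base_image", "python:3.11-slim")
    content = "\n".join([
        f"FROM {base}",
        "WORKDIR /app",
        "COPY . .",
        "RUN pip install -r requirements.txt",
        'CMD ["python", "app.py"]',
    ])
    return jsonify({"dockerfile": content})


@app.route("/validate-manifest", methods=["POST"])
def validate_manifest():
    """Check that a Kubernetes manifest GPT generated is at least valid YAML."""
    try:
        doc = yaml.safe_load(request.data) or {}
        return jsonify({"valid": True, "kind": doc.get("kind")})
    except yaml.YAMLError as exc:
        return jsonify({"valid": False, "error": str(exc)}), 400


if __name__ == "__main__":
    app.run(port=5003)
```

In the plugin model, GPT discovers routes like these through a machine-readable API description served alongside the app, which is roughly what I mean by the YAML config information above.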

But how do we reduce these mistakes? The answer lies in providing it with sample code to work from and continuously updating that context. I've simplified my approach down to a handful of prompts and have seen some interesting results. I've even used it to help me understand its own inner workings.
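Here is one way that "give it sample code" approach could look using the openai Python client. The model name and the sample snippet are placeholders, and the exact client call depends on the library version you have installed; this assumes the v1.x style client and an `OPENAI_API_KEY` in the environment.

```python
# Hypothetical prompt setup: anchor GPT's answer to sample code.
# Model name and sample snippet are placeholders; assumes the openai
# Python client (v1.x style) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

SAMPLE_CODE = '''\
from flask import Flask
app = Flask(__name__)

@app.route("/health")
def health():
    return {"status": "ok"}
'''

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Extend the sample code exactly as written; "
                    "do not invent endpoints that were not requested."},
        {"role": "user",
         "content": f"Here is my current app:\n{SAMPLE_CODE}\n"
                    "Add a /version endpoint that returns the app version."},
    ],
)

print(response.choices[0].message.content)
```

Constraining the model to an existing snippet like this noticeably cuts down on it inventing APIs that don't exist in the project.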

The largest GPT-3 model stacks 96 transformer layers, each operating on high-dimensional intermediate representations (a hidden size of 12,288), and its vocabulary is roughly 50K tokens. Input text is encoded as "embeddings" (words converted into vectors of numbers), passed through the trained layers, and decoded back into a probability distribution over that ~50K-token vocabulary, from which the next token is chosen.
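The flow itself is easy to see in a toy model. The sketch below uses PyTorch with deliberately tiny, made-up dimensions (nothing like GPT's real configuration) and an encoder-style stack without the causal masking GPT actually uses; it is only meant to show the three stages: token IDs are embedded, pushed through a stack of layers, and projected back onto the vocabulary to pick the next token.

```python
# Toy illustration of the embed -> transformer layers -> decode flow.
# Dimensions are deliberately tiny and made up; this is not GPT.
import torch
import torch.nn as nn

VOCAB_SIZE = 50_000   # roughly the size of GPT's BPE vocabulary
D_MODEL = 128         # toy hidden size (GPT-3 uses 12,288)
N_LAYERS = 4          # toy depth (GPT-3 uses 96)

embed = nn.Embedding(VOCAB_SIZE, D_MODEL)                  # words -> numbers
layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True)
stack = nn.TransformerEncoder(layer, num_layers=N_LAYERS)  # the stack of layers
to_vocab = nn.Linear(D_MODEL, VOCAB_SIZE)                  # numbers -> words

token_ids = torch.randint(0, VOCAB_SIZE, (1, 16))   # a fake 16-token prompt
hidden = stack(embed(token_ids))                     # pass through the layers
logits = to_vocab(hidden)                            # scores over the vocabulary
next_token = logits[0, -1].argmax().item()           # most likely next token ID
print(next_token)
```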

While this works well for surface-level problem solving, it isn't a perfect imitation of human reasoning. OpenAI has built a system that does remarkably sophisticated token prediction and can work through complex problems, but it can still give unexpected results if the input is unclear or misleading.

The potential for creating plugins for GPT is exciting. Imagine a plugin that can interact with a Kubernetes stack or build a Dockerfile based on a prompt. The goal is to create a system that can work alongside humans, understanding the business needs and suggesting the next course of action. This would require it to be more aware of the business context, but the possibilities are endless.
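As a sketch of what the Kubernetes side of such a plugin might do, here is a small helper using the official kubernetes Python client to gather the kind of cluster context GPT would need before suggesting a next action. The idea of pasting this summary into a prompt is my assumption; the client calls themselves are standard.

```python
# Hypothetical helper for a GPT/Kubernetes plugin: summarise cluster state
# so the model has real context before suggesting a next action.
# Assumes the official `kubernetes` Python client and a reachable kubeconfig.
from kubernetes import client, config


def cluster_summary() -> dict:
    """Return a small, model-friendly snapshot of pod phases in the cluster."""
    config.load_kube_config()  # use config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()

    pods = v1.list_pod_for_all_namespaces(watch=False)
    summary = {}
    for pod in pods.items:
        phase = pod.status.phase or "Unknown"
        summary[phase] = summary.get(phase, 0) + 1
    return summary


if __name__ == "__main__":
    # e.g. {'Running': 12, 'Pending': 1} -- a dict like this could be fed
    # into a prompt so GPT's suggestions are grounded in the real cluster.
    print(cluster_summary())
```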