The Internet of Bugs: 5 Ways AI Will Create New Software Engineering Jobs
Introduction
The Internet of Bugs is a term used to describe the increasing presence of software bugs and vulnerabilities in the digital world. In this article, we will explore five ways that Artificial Intelligence (AI) will create new opportunities and jobs in the field of software engineering. My name is Carl, and I have been working in the software industry for 35 years. I started this YouTube channel to answer questions from college students and share my insights on the industry.
New Startups
Historically, setting up a website required the expertise of a technical professional, making it expensive and limiting access to only larger companies. However, with the advent of tools like WordPress and Squarespace, the barrier to entry for startups was significantly lowered. Similarly, AI tools such as Large Language Models (LLMs) can now help non-technical founders launch online businesses with minimal technical expertise. As these startups grow, they will eventually need to hire software engineers and IT professionals, creating new job opportunities in the industry.
Automation for Productivity
AI tools like LLMs can significantly increase the productivity of software developers by automating certain tasks. While automation may lead to job displacement in some cases, it also opens up opportunities for developers to create more products and features faster than before. The low cost of AI tools compared to human labor makes it feasible for companies to invest in technology that can boost their output and drive innovation.
Customizing LLM Models
One of the key roles for future software engineers will be to customize the outputs of LLM models to meet specific business requirements. By understanding how LLMs work and leveraging tools like Stable Diffusion and ComfyUI, engineers can tailor the outputs to align with the needs of their organizations. Learning how to customize LLM models will be a valuable skill for software engineers in the future.
Debugging LLM-Created Code
Debugging is a critical aspect of software development, and LLM-created code will present unique challenges for engineers. Due to the complex and automated nature of LLMs, debugging their outputs will require a deep understanding of the models and their limitations. Additionally, regression bugs and adversarial attacks on LLMs pose significant risks that engineers will need to address. As such, debugging LLM-created code will be a crucial job function for software engineers in the AI era.
Adversarial Attacks and Security
Lastly, the rise of adversarial attacks on AI models, including LLMs, poses a significant security risk that software engineers will need to address. As malicious actors exploit vulnerabilities in AI systems to manipulate outcomes or extract sensitive information, the need for skilled engineers to defend against such attacks will grow. Understanding and preventing adversarial attacks will be a critical focus for future software engineering jobs.
Conclusion
In conclusion, AI technologies like LLMs have the potential to revolutionize the software engineering industry by increasing productivity, enabling new startups, and driving innovation. While AI tools offer many benefits, they also present unique challenges that engineers will need to address, including debugging complex code, customizing models, and defending against adversarial attacks. By staying informed and adapting to the changing landscape of AI in software development, engineers can position themselves for success in the evolving digital world.
Heya, i noticed the audio quality isnt great, would you mind having a look at audio enhancers, such as adobe podcasts? Thanks!
I guess any emergent technology goes through a hype cycle. Real utility starts after the hype has peaked and then crashed.
The only problem with technology people causing people to lose jobs is the friction and the disruption which upsets people's "routine" and "normal career progression" – forcing them to adapt quickly, learn new skills, transition roles, move companies or locations etc.
other than that – from a broad perspective, technology always has simply meant increased efficiency, and humans and companies always used that increased efficiency as an opportunity to do more, not less: create more, invent more, hire more, etc.
if every company needs less engineers, that just means there's room for more companies to exist – and each of them doing more than was possible beforehand.
of course, in the "ultimate" destiny of humanity – EVERYTHING could be automated so that human labor as a whole would be unnecessary: we could feed, heal, and give living spaces to humans without requiring them to labor for those products and services – i.e. "star trek" style socialism.
but we're still quite a long way from there, and on the way there we'd likely have more revolutions, disruptions, and friction.
I may be a unique case, I've been trying to teach myself to code for many years and I struggle to solve even the simplest coding problem. Yet with GPT4 I've been able to make a lot of progress on my ideas, mostly because I just want to do what others have already done but with my own twist. Without access to an llm I'd have made no progress at all.
I definitely appreciate the advice on learning to tweak llms to produce output for specific types of business, that gives me something tangible to grasp and focus on in this space as a less technical person.
It kinda sounds like you haven’t even used ai/ml for code assist.
Video request- no more jump cuts, it's human to pause between words and sentences 🙂 Even just the audio is jarring.
Regarding the topic, there are 2 pretty simple reasons I'm not afraid of it. One, every single damn time some new productivity tool came out, all it did was create more work. It might have killed some jobs in the short term, but in the long term it created orders of magnitude more, usually. And two, anyone who has worked in corporate America knows the huge gap between management and execution. A lot of the time management doesn't even know what it wants. "AI" won't change that.
Personally, I think we will lose a lot of jobs. There are currently so many people who study computer science. What is going to happen is that the best of them are going to get IT jobs and the rest will have to look for something else. Eventually after some time, the amount of computer science students will decline and we are going to achieve the perfect balance. IT jobs are going to be normal job which require a degree and getting into it with a bootcamp is not going to work.
Saying that we will have more jobs is straight up insane considering AI will improve by a lot and be able to find majority of bugs in matter of seconds
Thanks
wow…im shocked finally a person with a reasonable take on technology. 100% agree its going to be coding on top of AI that's going to replace traditional coding but coding won't go away since it is a very good way of creating deterministic outcomes and those outputs are always changing.
I feel like I can rely on your opinion a lot. Thank you for giving me a standard thought.
This is the most informative AI video I've watched so far.
AI however, is in the mean time, killing my chances, I hope it gets better sooner. I'm tired of of having skills that I can't put into practice.
The LLM attack vector piece is very interesting. All these companies exposing LLMs via chat bots etc are going to face issues.
LLMs are great at parsing, like really good. Got an annoying list of badly formatted stuff? Get an LLM to parse it. That’s a backend tool that is less exposed to an end user. You might even conceptualise of it as a parser, but behind the scenes it’s an LLM.
Now imagine that parser becomes a core part of your input handling system. One day a customer uploads a file that passes all the virus scans, but intentionally attacks the parser LLM.
This gets even more interesting if you have LLM “agents” with the ability to operate reasonably autonomously. Maybe the parser LLM talks to an agent that is able to setup billing processes… you can see where this is going.
And btw, having spent several days recently debugging handwritten parser code, I’d gladly hand that off to an LLM. There is just a tradeoff to consider.
Great video, got me thinking! Thanks!
Great video man. I'm taking Cybersecurity this semester and getting into all of the different types of vulnerabilities was pretty eye opening, especially when you know how common some of the vulnerable coding practices are. I have the same concern about AI being used like a weapon, but I guess we'll just have to wait and see. 🙂 Your insight is appreciated!
11:19 Cool, still hope for those of us interested in QA
If you are not in the financial market space right now, you are making a huge mistake. I understand that it could be due to ignorance, but if you want to make your money work for you…prevent inflation
Buen contenido, y buenos insights. Gracias!
My man
"It looks big, so I probably should repeat all the phrases I've heard and randomly extrapolate" – every second YouTub video out there.
It isn't like we've seen all kinds of "opinions on the price of NFT" videos all over the place just 2 years ago, right? And it isn't like Minsky dumped the whole AI field, because he "knew" connectivism was "not gonna work", right?
Jeremy Howard and Steve Brunton are probably the only 2 persons I know who actually KNOW what they are talking about.
The rest? They seem to be too busy talking to even realize they have no idea where the words are coming from.
Seriously, stop and think for a second, about what you've just said.
Do you understand how LLMs work?
It's not a BASIC program.
It's not an Arduino digital logic board.
You can not make a model in your head, and predict what LLM is going to output when you give it some specific input.
Then how can you reason about the boundaries of the outcome with the current AI cycle?
The only prediction we can make right now is that it is possible to create intelligence no less cognitively capable than humans, by proof of existence of us, humans (unless you believe we think with spirits and not atoms).
When will it happen?
What it would look like?
The best bet is – not tomorrow, and not much different from today, we're still not flying to Alpha Centauri, S-curve is a bit..
i can see you growing big, please don't change…. you are doing an amazing job
Thanks to the algorithm for finding this video. I absolutely agree. And will share personal experience. I am software enthusiast with basic skills. I know the general patterns of MERN stack. Don't kill me. I know JS is pain. Decided to build a fullstack web application with the help of openAI chatGPT 3.5. I had to guide it through the whole process of modulation of code. It always tries to spit all the code blocks in one huge file. And never reuses functions. At the backend, it never ties security with validation checks. Even when asked to it slacks. At the frontend the whole concept of reusability of components is missing. And it creates code that is not optimized. Always tries to find a shortcut and forgets dependencies, previous logic etc. Whats the point. I had to know the design patterns and oversee the process. Then to debug and refactor. This is a practical example of how we can use AI for prototypes but for now good code cannot be commited without the supervision of a human. Because we have that level of abstraction and conceptual thinking. I think the short term horizon is speeding up startup prototypes, doing chores etc. Which will be a great boost for developers' productivity and overall business opportunities for startups. Which is great. Similar to how the agriculture is now mechanized and the production scaled in the last 70y. As humans we should always be adaptive and creative. That is how we subjugated the planet( unfortunately, but that is another topic). Not by fighting a tiger in 1v1 bare knuckle match. Sorry for the long post.
Thanks for putting a perspective from yours years of experience.
What if llms start promoting other llms and start building some kind o f skynet that would be fun to watch.
Is this evn possible?
Thanks for the practical, in-depth analysis of the situation. Subscribed!
0:00: 🤖 AI's impact on software engineering job opportunities and advice for aspiring engineers.
2:17: 💻 Evolution of website creation from manual setup by experts to automated platforms, democratizing access to websites.
4:46: ⚙️ Impact of AI on software engineering job creation
6:57: 💻 Advantages of software development in terms of productivity and cost efficiency.
9:37: 💻 Customizing output of LLM models to match specific business needs is crucial for software engineers.
11:55: 💡 Challenges of debugging AI-generated code compared to human-written code.
14:05: ⚙️ Challenges of debugging code with foreign design patterns from different industries.
16:22: ⚙️ Challenges with using LLMs for large code bases and potential bugs in customization.
18:52: 💻 Impending chaos due to unknown exploit vectors for AI-generated code.
Timestamps by Tammy AI
There's so much cope here.
You don't need programmers if you can just tell the AI to create the app by telling it what you want the app to do.
Also, ChatGPT 4 scores 155 on IQ tests. And its reasoning capabilities are superior to the vast majority of people. Your choice in naming it "LLM" instead of "AI" makes no difference in the real world.
Great video, I’ve had similar thoughts! The future is bright. Subscribed!
I would love to see videos on customizing LLM models!
11:45 That’s right. Many this big tech companies talking about AI as a such godly ”being” but can’t even create properly working Windows or Android OS. Every software comes with many bugs and issues.
Hank Schrader is a software developer 😊
I wish more people had common sense like you do. Many startups don't have proper knowledge and they go for AI to replace people's jobs and get burned too.
Don't think they are LLMs anymore as they are transforming into MMMs, i.e. Mutli Model… Not me saying that but guys like the head of Deep Mind and that guy running Nvidia.