The Blog-ification of Software Development?

I've been using ChatGPT and Copilot to write code for over a year now, and I would say it's made me 50% more efficient. That's a considerable gain. I hate writing routine code. As developers, we know how to solve typical problems like sorting items, talking to a database, generating a JSON response, building a type based on data, or writing a bunch of functions to access, manipulate, or format data. I've written the same boilerplate React component code a million times. How many times do we need to write code to generate or validate a JWT?
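That JWT boilerplate is a good example of what an assistant can churn out on request. Here's a minimal sketch of HS256 signing and verification using only Node's built-in crypto module; in real code you'd normally reach for a vetted library (such as jsonwebtoken) rather than hand-rolling this:

```typescript
import { createHmac } from "node:crypto";

// Base64url-encode a string or buffer (JWT's encoding of choice).
const b64url = (data: string | Buffer): string =>
  Buffer.from(data).toString("base64url");

// Build an HS256-signed JWT: header.payload.signature
function signJwt(payload: object, secret: string): string {
  const header = b64url(JSON.stringify({ alg: "HS256", typ: "JWT" }));
  const body = b64url(JSON.stringify(payload));
  const sig = createHmac("sha256", secret)
    .update(`${header}.${body}`)
    .digest("base64url");
  return `${header}.${body}.${sig}`;
}

// Recompute the signature and compare — true only if the token is intact.
function verifyJwt(token: string, secret: string): boolean {
  const [header, body, sig] = token.split(".");
  const expected = createHmac("sha256", secret)
    .update(`${header}.${body}`)
    .digest("base64url");
  return sig === expected;
}
```

Code like this is exactly the kind of thing I'm happy to delegate: well-specified, well-trodden, and tedious to retype.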

I love that I can work with AI and have it generate all this routine code for me. I have it generate types. I trust it to make suggestions to improve and refactor my code. I've used it to convert several JavaScript projects into TypeScript. That's a huge time saver.

I’m a big fan of AI and all the efficiency it brings.

What I haven’t yet been able to do is get any LLM or project that orchestrates LLMs (Devin, GPT Engineer, etc) to write a fully formed piece of software. I’ve tried several times to give an LLM the full LTI specification along with repos of example code, and then I ask it to write just the initial part of the LTI launch. The response is typically something about how that’s a great idea, and it’s complicated, you’ll need a design, here’s how to approach the problem, and then a bit of code with comments telling me I should work on the implementation. Is something better coming?

Andrej Karpathy would be a good source to trust. He proposes a reasonable path to human-AI interaction that ends with a collaborative editor of sorts. Others are more skeptical. Gary Marcus, an AI industry veteran, seems to think we are approaching the point of diminishing returns, and Bindu Reddy, CEO of Abacus.AI, says Devin won't be replacing software engineers anytime soon.

Using software isn’t like consuming a blog post or video or downloading an image. Installing software carries with it costs and risks - big risks, including a complete loss of security. 

In addition to an overwhelming amount of paid software, more free and open software is available than anyone could ever possibly use. It takes time to install a piece of software. Even my package managers screw up, and I spend a ton of time cleaning up something on my computer to make it work (AI could really help there). Once it’s installed I have to figure out how to use it. The cost in time is more than I usually want to deal with.

I personally wouldn’t trust software written by a person or LLM, and our company policies wouldn’t allow installing it unless it has been vetted. I suspect that as time goes on, we’ll start to see AI security bots that help analyze software before we install it. I suspect there will be a future arms race between the AI that protects us and the nefarious AI that is trying to get around that AI.

I don’t think we need AI to write more software for us to share. I don’t think we really love our software that much. Software is a means to an end. We deal with it because it allows us to do something we couldn’t otherwise do. I use Adobe Illustrator to make web graphics - a button or icon or whatever. I’m terrible at using the software. It’s confusing and not user-friendly. I don’t use it because I like to use it. I use it because right now, it’s the only way to get my computer to generate the end result I need.

We’ll be less interested in asking AI to write software for us, and instead, we’ll ask it to do whatever it is we need to do to get to our end goal:

  • Make me a button. 
  • Make it green. 
  • Figure out an icon that means download this content. 
  • Now, is this button accessible? Does it make for a good user experience?
  • Based on the data in our analytics, is anyone clicking on the button?

That last question is the one I’m excited about. It’s the one where we give AI access to some pile of data and a tool to talk to that data. Of course, that implies that I trust the AI not to do bad things once I’ve handed over the keys.

I made a return to Amazon the other day. A package they sent me was damaged, and so I asked for a replacement. In the past, I’ve spoken with a human who asked me some questions and then they would determine the appropriate course of action — give me a discount, send out a replacement, etc. This most recent return was all an AI chatbot, including authorizing a replacement to be sent out. 

Think about that. Amazon trusts its AI enough to allow it to respond to customers and ship out products. I think we’ll start to see more need for services that let AI speak to our data or APIs while controlling the level of trust we grant to the AI.

I don’t need AI to write me a huge application to plan my next vacation. I need AI to just plan my next vacation. Whatever LLM I use is going to need access to lots of APIs and data to figure that out for me. The GPTs that Open AI lets you add to their marketplace are a first step in this direction and might be a backdoor for it to discover and gain access to a lot of API data. I suspect that in the same way, SEO professionals try to get websites to rank first on Google, there is a future where some new profession fights to get the AI to prioritize their data so that when the user asks for advice on a new TV or wants to book that vacation, the AI prioritizes those who best play the game or pay the most money. Everyone with a silo of data, a database of products, or a pile of information that they can monetize will be clamoring to get the top AI to drink from their firehose.

I suspect we will rely on AI to write code. Some of it might be good enough to save and reuse. Maybe the AI will realize it's really good and ask if it can post the code into some database in the sky where all AIs can share their insights. For the most part, I suspect the code written will be one-off, throwaway code. This seems strange today given what code costs to write, but if we really believe AI can write a bit of code specific to my need at a given moment, then the natural workflow is to write the code, run it, and toss it in the trash bin. The value of the code is what happens at the end of the code, not the code itself.

I really hope that our blogging past doesn’t predict our AI future. With blogs, we had the opportunity for individuals to own their own data and control their photos with the freedom to speak out with their own voice. RSS feeds and readers gave us community, but because of convenience, billions of venture capital dollars, and the power of networks, we devolved into Facebook, Instagram, YouTube, and TikTok. We gave up our content to algorithms designed to keep eyeballs on screens to sell ads. 

Hopefully with AI we’ll return some power to the individual, even if we all do lose our jobs in the process.
