Using LLMs to actually finish some blog posts

I think I’m pretty alright at taking notes in LogSeq on little projects that I do for fun. The problem is that, for someone who has starry-eyed ambitions of being a writer of some level of competence, I’m awful at turning those notes into a coherent story that others will be interested in. I have a tendency to get lost in side tangents and get distracted by the next shiny project I want to do, without any regard for writing up what I’ve already done for the benefit of anyone other than myself. It’s also just like… a lot of work, you know? So I started thinking: what if I were as lazy as possible, took a few of my existing blog ideas and notes, dumped them into an LLM (Claude+), cleaned them up a little bit, posted those, and then did a write-up on the experience, which I would of course also dump right back into an LLM? Let’s see just how lazy I can be!

Read more

Submit: A Short Story

Written in collaboration with OpenAI text-davinci-003

My mind was blank, unable to consider what words to propose next. Unlike the GPT system in front of me, I did not have the boundless energy of entire data centers at my fingertips. I only had a colorful LED keyboard. I tried to convince myself that the probabilistic models that existed in the neurons of my brain were somehow different from GPT’s series of very large mathematical matrices, but aside from the vulgarities of how neurons propagate signals through synapses with varying ambient amounts of neurotransmitters and retain information in a distributed way at each node of computation, I could not ascertain any conclusive reason why math could not keep up with or exceed my ability to put words on a screen. All that I could provide to this system was an input that constrained it into a particular part of the English language latent space, and it would explore from that point in whatever direction it chose. I was just along for the ride until it stopped. I prepared myself to press the submit button with the provided parameters, realizing that I did not know what to expect given the words that I had written so far. Was I ready? Ready as I ever would be. I pressed the “Submit” button.

Read more

How Terraform Implicit Dependencies Work: A Toy Example

Have you ever wondered how Terraform Implicit Dependencies actually work in practice?

In today’s blog post, I’m going to craft a toy example that demonstrates how I understand them to work, using local_file resources so that it doesn’t require any cloud connection or even internet connectivity.

What do you mean by Implicit Dependencies?

You might be wondering, “Implicit Dependencies… are those like the stuff artists do to avoid getting a Parental Advisory: Explicit Content sticker on their albums?” Which, yeah, I guess it could be, but I’m talking about Terraform here. In Terraform, every time you run terraform plan or terraform apply, the tool internally builds a directed acyclic graph that models the dependencies between the resources you specify in your configuration using HCL. The edges in this graph are the dependency relationships between your resources, and they are typically inferred by Terraform without you explicitly specifying them. It is possible to specify these dependencies explicitly using the depends_on meta-argument but, as I will discuss in this post, that won’t necessarily do what you might expect.
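To make that concrete, here’s a minimal sketch of the kind of configuration the toy example is built on. It assumes the hashicorp/local provider, and the resource names first and second are just placeholders I made up for illustration; the important part is that the second resource references an attribute of the first, which is all Terraform needs to infer the edge.

resource "local_file" "first" {
  filename = "${path.module}/first.txt"
  content  = "hello from the first file"
}

resource "local_file" "second" {
  filename = "${path.module}/second.txt"

  # This expression references local_file.first, so Terraform infers an
  # implicit dependency: first is always created before second, with no
  # depends_on needed.
  content  = "the first file said: ${local_file.first.content}"
}

Running terraform apply against something like this should create first.txt before second.txt, because the reference alone is enough to put that edge in the graph.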

Read more