As of today, November 25th, 2023, Samantha and I are celebrating a full year of operating our Mastodon server, pdx.sh. In this time, we’ve grown to a community of 27 active users! This journey has been incredibly rewarding, as we’ve watched our local community flourish gradually, with minimal marketing or promotion. It’s fulfilling to offer this little corner of the internet—a space free from sponsored posts and advertisements—where people can freely engage. We hope our volunteer efforts inspire others to contribute their time and resources towards making a positive impact in their own unique ways.
Using LLMs to actually finish some blog posts
I think I’m pretty alright at taking notes in LogSeq on little projects that I do for fun. The problem is that, for someone who has starry-eyed ambitions of being a writer of some level of competence, I’m awful at turning those notes into a coherent story that anyone else would be interested in. I have a tendency to get lost in side tangents and get distracted by the next shiny project I want to do, without any regard for writing up what I’ve already done for the benefit of anyone other than myself. It’s also just like… a lot of work, you know? So I started thinking: what if I were as lazy as possible, took a few of my existing blog ideas and notes, dumped them into an LLM (Claude+), cleaned them up a little bit, posted those, and then did a write-up on the experience, which I would of course also dump right back into an LLM? Let’s see just how lazy I can be!
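To make the laziness concrete, here’s roughly what that pipeline looks like as a script. This is a minimal sketch, not the exact tool or workflow I used: it assumes the pre-1.0 `openai` Python library and the (now-retired) `text-davinci-003` completion model, and the notes path and prompt wording are just placeholders for whatever export you happen to have lying around.

```python
# Minimal sketch: turn an exported LogSeq notes file into a rough blog draft.
# Assumes the pre-1.0 `openai` Python package and an OPENAI_API_KEY in the
# environment; the model name, path, and prompt are illustrative only.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def draft_post(notes_path: str) -> str:
    """Read raw project notes and ask the model for a first-draft blog post."""
    with open(notes_path) as f:
        notes = f.read()

    prompt = (
        "Turn the following project notes into a casual, first-person blog post. "
        "Keep the technical details, cut the dead ends:\n\n"
        f"{notes}\n\nBlog post draft:\n"
    )

    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=1024,
        temperature=0.7,
    )
    return response.choices[0].text.strip()

if __name__ == "__main__":
    print(draft_post("notes/llm-blog-posts.md"))
```

The “cleaned them up a little bit” step still happens by hand, of course.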
Submit: A Short Story
Written in collaboration with OpenAI text-davinci-003
My mind was blank, unable to consider what words to propose next. Unlike the GPT system in front of me, I did not have the boundless energy of entire data centers at my fingertips. I only had a colorful LED keyboard. I tried to convince myself that the probabilistic models that existed in the neurons of my brain were somehow different from GPT’s series of very large mathematical matrices, but aside from the vulgarities of how neurons propagate signals through synapses with varying ambient amounts of neurotransmitters and retain information in a distributed way at each node of computation, I could not ascertain any conclusive reason why math could not keep up with or exceed my ability to put words on a screen. All that I could provide to this system was an input that constrained it to a particular part of the English language latent space, and it would explore from that point in whatever direction it chose. I was just along for the ride until it stopped. I prepared myself to press the submit button with the provided parameters, realizing that I did not know what to expect given the words I had written so far. Was I ready? Ready as I ever would be. I pressed the “Submit” button.