Switching from a live content management system to the most powerful editing experience for technical authors

A good chunk of my background is in journalism and CMS-enabled publishing, so when originally reviving my website I was pretty excited about building with Svelte and a nice headless CMS (I went with Payload). But I was sleeping on this.

My ambition was to extend Payload to support as many custom components and features as I felt I needed to tell stories. I began doing so, but recently switched it up a bit for my own purposes – I migrated to a full Markdown setup for content using mdsvex.

Once I was up and running with a CMS again, it wasn’t long before I realized a few things:

  • Being able to post to my website on the go is nice, but it’s just me – I don’t actually need real-time publishing
  • I’d built a way to enter Markdown into the CMS and I was using it almost exclusively
  • Where Markdown won’t tell the whole story, mdsvex allows me to import and render Svelte components in my normal Markdown content (if you’re familiar with the React ecosystem, it’s like MDX, but for Svelte) – I was underestimating the value here

Rather than burning my free time customizing the CMS to handle custom data/content and send it over the wire every time I want to create new storytelling functionality, now my content comes from Markdown files that can nest components. And now I can even more easily prerender my pages at build time, allowing them to be served as quickly as possible.

I don’t need to support non-technical authors here. Why spend time doing the extra work?

So now to publish, the process goes:

  1. Write content, including any custom components necessary for the post
  2. Check in the code/content
  3. Build & deploy

Because we’re working with a codebase here, this may not scale well across a typical organization. But for the technically-inclined (and staff who are willing to learn a bit of Git and Markdown), it’s quite powerful.

This move is bittersweet – Payload is everything I want in a developer-first CMS, and it’s my current pick if the choice in CMS for a new project is left up to me – but I know this is the right move for me as a lone technical author publishing on my personal website. Today, I still use Payload for my custom analytics, form submissions and any other data dorkery.

Let’s dive deeper.

New to Markdown? Here’s a quick summary.

As its creator says on his website, “Markdown is a text-to-HTML conversion tool for web writers.”

If you’re an experienced developer or publisher, you know that the tags with which we mark up our pages have meaning for search engines, screen readers and any other bots ingesting the content. So it’s important that the author be able to convey that a headline is a headline, a paragraph is a paragraph, and that a blockquote is a blockquote, etc. Markdown gives us a way to wrap content in the intended tags, but with a simpler syntax that makes the Markdown document itself a bit more human-readable than a fully marked-up HTML page.

It may not sound like much, but it adds up to less typing, and more focusing on the content. The Markdown files can be colocated with the code for the website, and compiled to HTML for use within the website. Usually we just compile them ahead of time so they can be served as static files – no code runs on the server. It just sends the pre-built, browser-ready pages as people request them.

If you were authoring your content directly in HTML, you’d have to write a lot of markup (the ‘M’ in HTML, for the uninitiated) so the browser would know how to display your things. Here’s a quick example:

<h1>This is a top-level header</h1>
<h2>A second-level header, which is almost always styled to look smaller</h2>
<p>Every paragraph I write needs to ultimately end up in a <code>&lt;p&gt;</code> tag, just like this one. In my CSS, I'll target all paragraph tags to make them look the same.</p>
<p>It can be easy to see how adding all of these tags individually can become messy. It's not something we often do manually. <a href="https://google.com" title="Check out google.com!">Links can be particularly cumbersome</a> if you need a lot of them.</p>

We could just drop that HTML into a page and publish it. But by using Markdown, then ‘compiling’ it into HTML in a separate step, we can author and edit our content in a cleaner, more human-friendly syntax. Here’s the Markdown that would produce the above HTML:

# This is a top-level header
## A second-level header, which is almost always styled to look smaller

Every paragraph I write needs to ultimately end up in a `<p>` tag, just like this one. In my CSS, I'll target all paragraph tags to make them look the same.

It can be easy to see how adding all of these tags individually can become messy. It's not something we often do manually. [Links can be particularly cumbersome](https://google.com "Check out google.com!") if you need a lot of them.

If you’re new to Markdown and want to try it out, Gruber’s website is a good place to start.

Being able to use Svelte within Markdown files allows me to tell a story however it needs to be told

As I mentioned already, one of the main benefits of this approach is the ability to pull in custom components to compose a page or tell a story. This includes more corporate, business-objective-ish things such as:

  • newsletter signup forms (or other forms, such as surveys)
  • ads in the middle of the story (we love those)
  • calls to action
  • dataviz/graphs
  • calculators

You get the idea – if you can cobble it together with JavaScript, HTML and CSS, you can easily use it right alongside your prose. If I want to create some sort of contrived, interactive pizza calculator and pull it into this post, nothing is stopping me.

[Interactive demo: a “🍕 How much do you like pizza?” slider – at 50%, it reports: “Based on my highly-scientific calculations, you are approximately 50% pizza.”]
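
To make that concrete, here’s a minimal sketch of what the mdsvex source for a post like this might look like. PizzaCalculator is a hypothetical component name, just for illustration – the point is the import:

<script>
  // hypothetical component, living wherever your Svelte components live
  import PizzaCalculator from '$lib/components/PizzaCalculator.svelte'
</script>

Regular Markdown prose goes here, and the component renders right where I drop it:

<PizzaCalculator />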

Maybe I’m writing an entire post on Pizza, and want to present my personal findings with Chart.js.

I inlined the dummy data for that graph, but I can just as easily request data from elsewhere. If I were writing a soft piece about how great Cincinnati is, I could pull the local brewery data from openbrewerydb.org and present it in a little widget, right next to my content.

Tomorrow is Thanksgiving here in America. What if I’m feeling festive, and want to cut a turkey loose across my page, right here in between paragraphs? Hold my beer.

You get the idea.

For slow-churn data, you can build and pre-render simple static APIs

With this approach, our Markdown files are serving as the data source for our content. Joy of Code has a fantastic post about building with mdsvex, and it contains a great explainer on building an API endpoint for your content, which can be used elsewhere on your site, or externally. Endpoints can also feasibly be prerendered at build time, making them even faster.
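
As a rough sketch (the paths and filenames here are assumptions, not prescriptions), a prerendered Sveltekit endpoint that exposes post frontmatter as JSON could look something like this:

// src/routes/api/posts.json/+server.js
import { json } from '@sveltejs/kit'

// bake this endpoint into a static file at build time
export const prerender = true

export function GET() {
  // Vite's import.meta.glob eagerly pulls in every Markdown module at build time
  const modules = import.meta.glob('/src/posts/*.md', { eager: true })

  const posts = Object.entries(modules).map(([path, module]) => ({
    slug: path.split('/').pop().replace('.md', ''),
    ...module.metadata // mdsvex exposes a file's frontmatter as `metadata`
  }))

  return json(posts)
}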

Imagine you’re building a website for a private software company that has a portfolio of products, each with version numbers, price, product name, description and other data that need to be updated upon release. Elsewhere on your site, you’d almost certainly mention the product version number and price, and want to update it globally. Maybe partner companies and resellers want to be able to check for new product releases so they can update their own websites, or develop their own timely marketing push for your products.

With content and static data releasing in tandem with the rest of the website’s code, we can cover all our needs, and update everything in one simple code release.

This could all be done with arbitrary content/data in the frontmatter, and the actual Markdown content could even be used to house release-specific content, such as the marketing-cleansed changelog, or other release details to use on a page.
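
For example, one hypothetical product’s Markdown file might carry its data in the frontmatter, with the release content below (the field names here are made up for illustration):

---
productName: Widget Pro
version: 4.2.0
price: 199
releaseDate: 2023-11-20
---

The marketing-cleansed changelog and any other release-specific details
live down here in the Markdown body, ready to be rendered on a page.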

It’s also worth mentioning that this can be done with different types of static files, such as CSVs, and can be combined with the more typical content delivery and ingestion strategies. If you need to support ongoing, real-time publishing (say, for the blog and other pages), you can get the best of both worlds. And if you have no need for the content part, Vite makes it trivial to import JSON files for the same purpose.
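
A minimal sketch of that, with a hypothetical file path:

// Vite parses the file at build time – no fetch, no fs, just an import
import products from '$lib/data/products.json'

console.log(products[0].version)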

Proper caching strategies in front of dynamic API endpoints (i.e., endpoints that pull data from databases or other data sources) probably make more sense in most cases, but for slow-churn content, static files as a data source aren’t quite as crazy as they may sound at first.

Looking forward: Your Markdown content can easily go with you when it’s time to rebuild

It’s inevitable – eventually you will want to build a brand new website, and you’ll likely be switching some of the underlying tech and strategy. Maybe you want to get away from Markdown and migrate back to a CMS, moving your content into the CMS’s database via an import or a lower-level migration.

I’ve seen some ugly data migrations over the years, but if I were beginning one today, I think I’d be happy to see that Markdown files were part of the source data that needed to make it in.

There’s always a risk that malformed data has made it into these files (humans ‘gon human), but when developing your .md file against a running local copy of the codebase, the feedback you receive as you compose is immediate. You see how it’ll function live, in production and all lower environments. If you messed up badly enough that the site will crash on your Markdown file, the only way you’ll miss it is if you (or someone else) aren’t testing your work.

Migrating away from this setup once it’s become the wrong choice for your organization would probably look different depending on where/what you’re moving it to. You could build an API for the new system to consume, or extend the new one to do the importing for you. But this also highlights a potential footgun.

If you’re using a lot of custom components in your content, you’ll need to either sort out how to handle those in the new system, or forget about them entirely. For what it’s worth, most CMSes worth using have developed some sort of way to do this, some directly in the rich text editor.

I was undervaluing this approach at first

MDX has been around for a while, and I knew it made components-in-markdown a thing, but I was always placing more value on being able to post from anywhere, at any time.

I’d done a lot of thinking about it over the years, but I’d always come up with a reason to stick to my CMS-y ways. Over time I’d read about startups choosing Docusaurus for their websites and documentation, which got me thinking about why.

I think one reason for this is similar to my own: To minimize the work you must do to publish anything, no matter how atypical. For a technically-inclined solo author or small team, I don’t think you can beat this, unless you simply don’t need or want the expressiveness this nets you.

But you may be able to see how this might not scale well for growing teams. New hires inevitably happen, and not all marketing and creative folks will feel great about learning Markdown and the basics of Git.

Using Skeleton UI's dark mode toggle programmatically from within other components

There’s a whole ton about developing in the real world that I didn’t learn until I got to work with other developers. One of those lessons is relevant here: Never be afraid to dive into the source code of the open software that your software depends on. If you think a function call should be working, but it isn’t, you can — and should — go look at what it’s doing with your input!

I know from first-hand experience that the folks behind the Skeleton Svelte UI library have not been shy about making sure any state that Skeleton components need/use is managed in Svelte stores. They expose this quite often in the documentation; however, I don’t see any mention in the docs about how to work with the user’s dark mode state programmatically – i.e., whether the user has dark mode on or off.

I was thinking about different ways I could handle changing between two different backgrounds depending on whether dark mode or light mode was on. That led to wondering whether I could check the user’s dark mode choice from JavaScript, in case someone asked me to do something beyond visual changes – think something like having a chat bot greet the user differently based on whether they have dark mode on or off. To do this, I’d have to figure out where the <Lightswitch /> component was storing this value.

Sure enough, Skeleton stores this value with one of its own Local Storage Stores. Colocated with almost every one of its Svelte components is a Typescript file that exports functions that are exclusive to that Svelte component. This lightswitch.ts file is one that supports the Lightswitch component, which we can see is much more robust than it seems on the surface!

If you checked out that code, you could see on line 3 that the component imports a whole bunch of exports from that file. One of those tasty chunks of Typescript is a Local Storage Store called modeCurrent, and it simply stores a boolean value: true or false.

We can see the function is imported on line 5, then right below that we have a few stores defined:

// Stores ---
// TRUE: light, FALSE: dark

/** Store: OS Preference Mode */
export const modeOsPrefers = localStorageStore<boolean>('modeOsPrefers', false);
/** Store: User Preference Mode */
export const modeUserPrefers = localStorageStore<boolean | undefined>('modeUserPrefers', undefined);
/** Store: Current Mode State */
export const modeCurrent = localStorageStore<boolean>('modeCurrent', false);

The major key 🔑 is the export keyword: it’s exporting the stores after they’ve been created, meaning we can import them elsewhere. The keyword alone isn’t what makes the next code line possible, though – the store also gets re-exported at the package level; otherwise, we’d have to import from the lightswitch.ts file directly.

So in some random text component – nested quite a few levels deep into the onion – I imported this store for my own usage:

import { modeCurrent } from '@skeletonlabs/skeleton'

This example is redundant, as I’ll get into at the end, but from here, you could do a reactive statement to create a class to slap on the component:

<script>
  // we access a store's value with $storeName:
  $: bgClass = $modeCurrent ? 'graph-paper' : 'circuit-board'
</script>

If you haven’t seen this before – in the code above I use a ternary operator to have { bgClass } evaluate to graph-paper when it’s light mode, and circuit-board when dark mode is on.

We could also use a derived store to calculate derivative values based on light/dark:

<script>
  import { modeCurrent } from '@skeletonlabs/skeleton'
  import { derived } from 'svelte/store'

  const welcomeMessage = derived(modeCurrent, (isLightMode) => {
    // modeCurrent holds TRUE for light mode, FALSE for dark –
    // process the value and return whatever you need to do the job
    return isLightMode ? 'Hello, light mode!' : 'Pssst... hey, over here'
  })
</script>

<div class="">
  <h2>{ $welcomeMessage }</h2>
  <slot />
</div>

Then you would, as always, read your derived store’s value elsewhere with the dollar sign syntax: $welcomeMessage

but wait… couldn’t I just use the dark: prefix?

Yes. You’d really only want to do this if you need to do dynamic (i.e., with JavaScript) things in response to the user’s choice. You wouldn’t normally want to do JS things in response to dark mode, especially in a Tailwind-enabled app – you’d typically just be changing colors.

In typical usage of dark mode within Skeleton and/or Tailwind (which Skeleton is built upon), there will be a .dark class on the body for you to use in your CSS rules to do dark-only styles, and there’s a dark: prefix you can use with your classes to indicate a class is dark-mode only.

<script>
  // doing other things up here for my component that have nothing to do with dark mode
</script>

<div class="text-slate-500 bg-white dark:text-white dark:bg-slate-900">
  <p>my content</p>
</div>

<style lang="postcss">
  /* we could also do it from component styles, using @apply */
  div {
    @apply text-slate-500 bg-white dark:text-white dark:bg-slate-900;
  }
</style>

Shedding unwanted requests at the server layer

For as long as websites exist, the robots, bad guys, and wannabe internet supervillains will spam our servers with junk requests in hopes of finding known attack vectors.

Due to a variety of circumstances – including, but not limited to, poor decision making, technical debt and aggressive backward compatibility policies (please, just leave the old, bad/deprecated API behind) – server software of all flavors will inevitably leave itself vulnerable in some spots.

Looking through my server logs, I can see these dubious dunces trying to ping files that would tell them about a widely- and publicly-known soft spot on my server that they can further dig their grubby little fingers into.

Let’s walk through a quick example in xmlrpc.php, brought to us by Wordpress.

xmlrpc.php

I’m not actively hosting any PHP apps at all, so a request for anything.php appearing in my server logs is an obvious bad actor. In this particular case, this file is a known attack vector in the world of Wordpress.

If our most-definitely icky-breathed fellow human gets a positive hit from my server when trying to request this file, there are at least a few implications:

  • they will know the site they tried to hit is a Wordpress (WP) site, and could potentially log this note in a database somewhere for further abuse in the future (there are many other ways to identify WP sites, so this isn’t necessarily a big deal on its own, but it’s not nothing)
  • XML-RPC is present, and this known attack vector is just sitting here, pants down, susceptible to brute force attacks, among others
  • they also now know that your server runs some sort of LAMP setup (to run the actual WP software), and, while most of the software installed there is typically quite tanky, this gives more clues as to what additional items are on the menu for a bad actor to try to take advantage of (e.g. one could try to brute force SSH access to your server, poke around to take advantage of lazy/bad system administration, try to get in via FTP… this could be its own post)

So this seems to be a pretty juicy vulnerability, and, frankly, it’s a bit wild that it’s turned on by default. This isn’t super relevant for me, as I’m not running a WP site. I would, however, like to keep the riffraff from laying their diddly hands upon my precious applications.

let’s turn them away at the gate

Until I do something to protect myself, when one of these party-pooping good guy haters wants to try to poke at (read: make a request to) my frontend for xmlrpc.php or any other file that may tell them about soft spots, my firewall, not knowing any better, passes the request along to the server layer, which also doesn’t know any better and passes it on to the application layer. Fortunately, Sveltekit has no idea what to do with PHP, and it’s dead in the water there.

I’m not OK with my babies having to deal with the stranger danger in the first place, though. I want to send them away at the outer layer of the onion; as early in the request as possible, I want to check a list of known attack vectors, and send ‘em packin’.

Better yet, since I’m not currently using any PHP at all, I can just keep an eye out for all requests for .php files, and send ‘em packing right then and there.

The server layer is a good place to do this. We could do this at the application layer, but why even bother running the extra code? Why even hand the request off to the application at all, if I can do the job with Nginx? At any sort of scale, doing so could add up to a pretty beefy savings in bandwidth and work that your server has to do.

An easy way to configure Nginx to turn away all PHP requests is to add a couple of simple lines to our Nginx config. On Linux machines, this is usually somewhere in /etc/nginx. We need to add a location block within our server block.

server {
    # ... boring SSL stuff and whatnot

    # a simple regex to match all requests ending in .php, and deny access
    location ~ \.php$ {
        deny all;
    }

    location / {
        # ... the real application
    }
}

And that’ll do it! Restart your Nginx process (sudo systemctl restart nginx or sudo service nginx restart should do the trick if you’re on Debian), then try sending some request to your site for anything.php and verify that it’s getting turned away. Your Nginx instance is now basically digital Marshawn Lynch, leaving a trail of dejected, unwanted .php requests in its wake.
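
That verification can be as quick as a request from any terminal (substituting your own domain, of course):

curl -I https://your-site.example/anything.php
# HTTP/1.1 403 Forbidden   <- denied at the Nginx layer; the app never saw it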

The location match is just a regular expression (regex), so if you want to block only specific files, you can tweak it and make it as complex as it needs to be to match your use case, or introduce additional location blocks.

A fond memory with my favorite command-line text editor: Vim

Bram Moolenaar – the creator of Vim – died a few days ago, making it feel like as appropriate a time as any to share a fun story that he enabled with his software.

Vim isn’t my main text editor, but it is my main text editor when I’m SSHd into a server and need to edit a file from the command line. Thank you, Bram!

It was a year or so into my first job out of college, and I was on a family vacation in North Carolina. I was with my dad, dragging fishing gear to a canal we wanted to try dipping a line into, and I started getting some panicked texts from my friends at work.

We’d recently launched our first Drupal site on a shiny new virtual private server (VPS) running FreeBSD with no desktop UI, and had a product or raw material shortage of some sort that needed to be up at the top of the website on an emergency basis. I hadn’t built the functionality to do that yet, but it could be added easily enough.

This is a small marketing department, not some large production with an agency managing the brand and all that. I was the only developer doing anything on this thing, so I didn’t have a problem making a quick change on the server and cleaning it up when I got home. Neither did anyone else.

This particular site being Drupal, I could simply add a new region to the theme where we wanted it (at the top of the template), then let someone else with a user account know where to put the content.

So I put down my stuff, logged in to the server via SSH using Prompt on my iPhone (which is a product that absolutely slaps, by the way, if you’re the type of nerd who wants to SSH into servers from your iPhone), added the new region and cleared the caches with Drush (which also slaps). Picked up my stuff and caught up with my dad.

Obviously, leaving a dirty file on my production server (and, uh… testing on it, for that matter) while I’m out on vacation isn’t ideal, nor is this what I’d call a typical use case for a CLI text editor, but we needed a resolution, and I could deliver quickly thanks to the connectivity provided by the phone, and a command line text editor waiting for me on the server.

A more common/practical use case might be the editing of configuration or other files on the server, e.g. editing Nginx or Apache config to spin up a new website, or tweaking firewall config.

If you host your own websites, or have ambitions of doing so, I strongly recommend familiarizing yourself with Vim, Nano, or another CLI editor. In the spirit of doing some Vim evangelism to future generations of computer dorks, I’ll cover here some of the essential parts I use, personally.

I’ve worked with quite a few developers over the years who use it as their main text editor, and if you really take the time to learn it and lean into it, it can be wildly powerful. Usually this developer is way better at regex than I am, and with far stronger opinions about how their text editor behaves. I am not this developer (at least not when it comes to CLI text editors), and don’t usually get too far away from out-of-the-box defaults unless something is really in my way. These are the bits of Vim that I use enough to memorize them.

The important parts of Vi/Vim

There are so many other guides out there for Vim and other CLI text editors, but my aim is to make a brief, practical guide of only the minimal parts I use myself as a software-first person who dabbles in devops/sysadmin.

I like Vi and/or Vim because it was what I was originally taught, and it also seems to be preinstalled on most Linux servers I’ve worked with. It’s almost certainly one of the first things cut from those leaner, smaller Linux flavors that end up powering people’s Docker apps, but if you’re deploying Docker containers and find yourself needing to SSH into a live container to edit a file, I imagine you did something pretty wrong somewhere (not that I’ve got room to talk).

Insert vs. Normal/Command/Interactive

The first thing to know about Vim is that it has two main modes: Insert and Normal (I’ve seen it called a few things, but we’ll roll with this). Normal mode is sort of like switching off your keyboard’s usual functionality, and turning it into an alternate keyboard with buttons specially tailored for editing text. More on this shortly.

One exits Normal mode and enters Insert mode by simply hitting the letter ‘i’ on your keyboard, allowing you to type/edit text like you normally would. It should say ‘INSERT’ somewhere in the bottom left of your screen. Out of the box, this is a minimal version of the text editing experience you know and love from elsewhere, even if the setting is a little more barbaric.

When you’re finished editing and want to quit (or do anything else other than edit), you hit Esc once to exit Insert mode and go back to Normal. From here, many of the keys on your keyboard have new superpowers, allowing you to copy or delete entire lines, navigate to different parts of the document, or search it.

Delete an entire line

At some point in working with config files, deleting entire lines comes in handy. This is a double-tap of the lowercase ‘d’. From Normal mode: dd

Delete a single character

Carving out a single letter or two also comes in handy rather often for me. In Normal mode, move the cursor (with the arrow keys, although a Ctrl + click may work) to the character you want to ax, then hit x.

Search

Vim is at least as capable as any other editor out there, but it’s almost never obvious how to do anything. Searching a file is as great an example of that as any. You can kick off a search from Normal mode simply by pressing / and typing your string. From there, you can hit Enter or n to move to the next result.

There’s actually a somewhat baffling amount of power just here when you factor in that you can use regular expressions and such here. Most of that is very need-to-know, and I dig into documentation if I need to do something more involved. Usually if I’m in a file with Vim and using this, it’s just a quick check for a specific thing I expect to be there, and never replacing strings or anything like that.

Saving

In this world, we think of it less as ‘saving’ and more as a ‘write to disk’. So rather than ‘save’ being the operating word, or clicking on a floppy disk, we issue a ‘write’ command. We do this from within Vi or Vim by using Esc to get back to Normal, then typing a colon followed by a ‘w’ (:w), then Enter.

Yes, we are a far cry from clicking on a floppy disk icon.

If it doesn’t let you save, it’s probably because you’re trying to edit a file you don’t have adequate permissions to edit. If you’re editing server config (e.g. Apache/Nginx files in your /etc directory) and haven’t assumed root permissions when editing the file (i.e. with sudo vim file-to-edit.conf), it will assume you’re not allowed to do what you’re trying to do.

Quitting

As long as Vim continues to exist, so too will jokes about quitting Vim. It’s actually pretty easy. It’s like saving, but with the letter ‘q’. Hit Esc to get back to Normal, type :q and hit Enter.

If you try to quit without saving, it’ll stop you and tell you about your dirty file. If you want to quit anyway, simply add an exclamation point at the end. :q!

Veteran Vim users often write and quit at the same time, which simply combines the two: :wq

Copy/Paste

While I don’t use it quite as often, knowing how to copy and paste never hurts. But in Vim, we don’t copy and paste. We yank and paste.

From Normal mode, yy will ‘yank’ (copy) the whole line, and p will paste it below the cursor.

Try it!

  • From the command line, change into the directory within which you want to create a test file (e.g. cd ~/Desktop)
  • Start Vim for the creation of a new file by just typing vim my-test-file-name.txt
  • Press i on your keyboard to enter Insert mode and type a few lines just like you would in any other text editor
  • When you’re done, go back to Normal mode by pressing Esc, then try deleting a whole line by placing the cursor over it and typing dd, or a single character, with x
  • Save and quit at the same time by issuing :wq from Normal mode
  • On a desktop? Try double-clicking on your new .txt file to see it open like regular people do.

PayloadCMS + Sveltekit: How to add a new feature

One of my favorite things about Payload is how easy it is to add new functionality.

With any content management system (CMS), many of the new features a developer will be asked to build are a (hopefully) simple matter of getting a new data shape sent to the front end, and modifying the view layer/frontend to present that raw data in a human-friendly way.

The difficulty presented here varies depending on the CMS you’re working with:

  • Wordpress remains popular for publishing content, but as soon as your content needs become more complex, you’re looking at custom code or third-party plugins – there simply isn’t a clean out-of-the-box way to describe new data shapes
  • Although the learning curve can be steep, Drupal is amazing at enabling developers and site builders (i.e. folks who don’t really code, but know their way around Drupal’s robust and mature module ecosystem) to create custom data shapes meeting specific content needs, then querying and presenting that content
  • With headless-only content management systems (where content is served over APIs, and there is no out-of-the-box way to send full HTML pages in response to requests instead), the contract becomes more clear, since your view layer is a separate application entirely

Payload’s Blocks functionality is one of the first of its design choices that spoke my language, and it’s one of its most powerful features. Essentially, it allows you to define custom data shapes, and give your users the power to choose any combination of those shapes.

This is so ridiculously potent, particularly when you combine it with the Array field and other functionality. The Array field type on its own is the way to go for homogenous, repeatable data shapes (e.g. an image, title and blurb for an old-school slider on the corporate home page), but by using Blocks within an Array, we can let the content author choose which blocks they need to tell the story.

Every post on this site is one record in my Posts collection, each made up of a few of the usual fields a blog post needs (category, tags, date, etc.) along with one big content field that’s an Array of as many (or as few) Blocks as I need to compose my story.

I’ve already built a few of the shapes I knew I’d need:

  • RichText, containing one single Rich Text field for prose – for now, this is the only field, but I can easily add other fields in the future if I want to, say, add a checkbox to tell the front end to highlight/emphasize a given usage of this block
  • Code, containing a Select field for choosing a language, and a Code field for… well… the code
  • Media, which lets me upload and display images only (for now)
  • Markdown, which I added mainly for those simple text posts that don’t need a bunch of other things (I start most of my writing in markdown)

This already adds up to a pretty expressive way to build pages, but I can add more options here as my storytelling needs require.

The spec: Links

Let’s add another Block – this one for showing an arbitrary number of links that are related to the post. But rather than simply adding raw links right into the post within a RichText Block, I want to break these out into their own Links collection. That way I can potentially add features that use these links (e.g. a timeline of news by category, showing the evolution of something like artificial intelligence over time).

This will be a Payload Array field containing one single subfield: a Relationship to my new Links collection. Each link should have Text fields for the actual link URL, title and text, along with a Date field where I can add the date the original work was published. So let’s start with a simple configuration for my shiny, new Links collection:

import { CollectionConfig } from 'payload/types'

const Links: CollectionConfig = {
  slug: 'links',
  admin: {
    useAsTitle: 'text',
  },
  access: {
    read: () => true,
  },
  fields: [
    {
      name: 'text',
      label: 'Link text',
      type: 'text',
      required: true
    },
    {
      name: 'url',
      label: 'Link URL',
      type: 'text',
      required: true
    },
    {
      name: 'title',
      label: 'Link title',
      type: 'text'
    },
    {
      name: 'originalPublishingDate',
      type: 'date',
      admin: {
        date: {
          minDate: new Date('1800')
        }
      },
      required: true,
    }
  ],
  timestamps: true,
}

export default Links

Nothing wild to see here if you’ve already looked at some Payload config – or Typescript/Javascript, really – just a few essential text fields and a date field. Adding a separate Date field allows me to add a Link long after it was originally published, and still have it work well in whatever time-sensitive functionality I add later on the front end. Out of the box, the date picker wouldn’t let me go back very far, so I set the minDate to the year 1800. If I somehow end up needing to go back farther, I can always update this.

With my new Links collection all set, it’s time to create a definition for the Block, and add it to the existing Blocks in my Posts collection.

All I’m going to do here is make a Block that can be imported into a Blocks field, and it’ll itself have one solitary field: an Array. But instead of defining its own data shape with a handful of fields, that Array will contain a single Relationship field pointing to the new Links collection, and will allow me to add as many references to Links as I’d like.

Clear as mud, right? It’s probably easier to just read some code.

import { Block } from "payload/types"
import { blockFields } from "../../fields/blockFields"

export const Links: Block = {
  slug: 'linksBlock',
  fields: [
    blockFields({
      name: 'linksBlockFields',
      fields: [
        {
          name: 'links',
          type: 'array',
          fields: [
            {
              name: 'linkRel',
              type: 'relationship',
              relationTo: 'links',
              // If I were building this with intent for other users to add content, I’d probably also set a `max` here to limit the amount of links that can be displayed
            }
          ],
          required: true
        },
      ],
    })
  ]
}

You may have noticed this ‘blockFields’ function I’m pulling in – I got this pattern from the Payload team’s open source code for their own website’s Payload instance. It’s a pretty straightforward function that wraps all your fields in a Group field, among other things.

For the final step on the backend, let’s add this Block to the Blocks field I’ve already got on my Posts collection.

import { Links } from '../blocks/Links'
// ... other imports here

const Posts: CollectionConfig = {
  slug: 'posts',
  fields: [
    {
      name: 'contentBlocks',
      type: 'blocks',
      blocks: [
        RichText,
        Media,
        Markdown,
        Code,
        Links,
      ]
    },
    // ... additional fields
  ]
}

With that step complete, we can now see our new Block in action.

And now for the frontend

I won’t get into the actual data fetch to Payload to get the content from Sveltekit, as that’s pretty well documented, and it looks a little different depending on which API you’re using: GraphQL or REST. But it’s probably worth mentioning that if you’re using GraphQL to query an Array of Blocks like this and are just getting started, you’ll want to be sure to read up on Union types. Basically, you need to specify which fields you want for each Block type.
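
If it helps, a query against this setup might look something like the sketch below. Note the type names are illustrative – Payload generates them from your config, so check the generated schema (e.g. in the GraphQL playground) for the exact names:

query {
  Posts {
    docs {
      contentBlocks {
        # illustrative generated type names – verify against your schema
        ... on RichText {
          blockType
          richText
        }
        ... on LinksBlock {
          blockType
          linksBlockFields {
            links {
              linkRel {
                text
                url
                title
                originalPublishingDate
              }
            }
          }
        }
      }
    }
  }
}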

Once you’ve got the data in hand, it’s just a matter of using it in a Svelte component.

For each content block in this array, I’m also making sure to ask GraphQL for the blockType. I can loop through the array of Block data, and render each one through my ContentBlock component. Svelte gives us a nice, clean way to render different components dynamically at runtime.

<script>
  import { onMount } from 'svelte'

  import CodeBlock from './CodeBlock.svelte'
  import JsonDebug from './JsonDebug.svelte'
  import LinksBlock from './LinksBlock.svelte'
  import MarkdownBlock from './MarkdownBlock.svelte'
  import MediaBlock from './MediaBlock.svelte'
  import RichTextBlock from './RichTextBlock.svelte'
  import SvelvetBlock from './SvelvetBlock.svelte'

  import Row from '$lib/utils/Row.svelte'

  let calculatedBlockType = undefined
  
  export let props = undefined

  onMount(()=>{
    if(props.blockType){
      switch(props.blockType) {
        case 'code':
          calculatedBlockType = CodeBlock
          break
        case 'diagram':
          calculatedBlockType = SvelvetBlock
          break
        case 'richText':
          calculatedBlockType = RichTextBlock
          break
        case "linksBlock":
          calculatedBlockType = LinksBlock
          break
        case "markdown":
          calculatedBlockType = MarkdownBlock
          break
        case "mediaBlock":
          calculatedBlockType = MediaBlock
          break
        default:
          calculatedBlockType = JsonDebug
      }
    }
  })
</script>

<Row>
  <svelte:component this={ calculatedBlockType } { props } />
</Row>

You can see how I import all of the components I need to represent what essentially amounts to a stack of content blocks that come together to make the story. This component will render as many times as it needs to in order to get the job done, and if it encounters a blockType that it isn’t expecting (i.e., it’s not in my switch statement), it’ll just dump that data shape onto the page using a simple debug component I put together to stringify the JSON and drop it onto the page. This is really nice when working on this stuff locally, and, my site being kinda nerd-oriented and all, I’m OK with that showing on my live site in the event that I manage to let that happen.

Now, for our final bit of frontend, let’s look at how we display these links in a Svelte component. I’ll omit the CSS, as I’m probably going to change that pretty heavily soon anyway.

<script>
  import Book from '../primitives/heroicons/Book.svelte'

  export let props = undefined
  let links // we don't need to declare this, as the line below will do that for us, but I like to include it for readability

  $: links = props.linksBlockFields.links ? props.linksBlockFields.links : []
</script>

{#if links.length}
  <div class="links-block">
    <h2>
      <Book class="inline w-5 h-5 mr-1" />
      Related Reading
    </h2>
    <ul>
      {#each links as link}
      <li>
        <a title="{ link.linkRel.title }" href="{ link.linkRel.url }" data-published="{ link.linkRel?.originalPublishingDate }" target="_blank">{ link.linkRel.text }</a>
      </li>
      {/each}
    </ul>
  </div>
{/if}

I import an icon I want to display alongside the title, export the generic ‘props’ variable that all of this component’s peers do, then take off of that what I need with the reactive statement on line 7.

That dollar sign is JS label syntax, which Svelte leverages as a way of telling the compiler a statement should be evaluated again when the values it depends on change. This lets me ensure that the component always has an array that is zero items or longer. I could show alternative UI if the array of links is empty, but for now I’ll just render nothing. This is an easy way to always have something in the field that Svelte can work with, even if the payload takes a while to download.

And that should do it! Notwithstanding questionable design choices, here’s our new feature in action:

A simple lesson in defaults from Blink 182’s drummer

Early in my journey as a budding drummer, the friends who’d roped me into buying a drum kit and joining the band had also introduced me to what would become another interest: all forms of heavy rock music, and all its various subgenres.

As a result, I’d begun studying metal drummers who did the things I wanted to learn to do. And like so many young folks who are eagerly jumping into metal drumming, this pretty much meant one thing: I wanted to learn the secret to being able to play high-tempo (and maybe accurate, even) 16th notes on the bass drum.

There had to be a cheat code. How is he doing this? Double strokes? Heel-toe? Heel up? Heel down?

The cheat code, as any experienced person in any discipline will likely tell you, is always practice. But that didn’t stop me from learning as much as I could about the art, and the tools used to create it. If someone was talking about, say, how X pedal line stacked up next to Y’s offering, where they put their spring tension, what type of drive, the shape of the cam, how they tuned their bass drum or any of the other odds and ends, I was taking notes.

One of my favorite broader takeaways from all of my digging into the craft came from an interview Travis Barker did with one of the drumming magazines. I’d say it’s pretty common for interviewers to ask about bass drum pedal settings, because someone in the audience will always want these details, actually-useful or not.

This Q&A (I think it was an old Modern Drummer cover story?) was no exception. And when asked about his pedal settings, Travis essentially said that he doesn’t like to toy with his pedal’s factory configuration out of the box, because if his pedal breaks, and he needs to knock the shrink wrap off a brand new one right in the middle of a show, he doesn’t want to have to fuss with dialing in a perfect spring tension, beater angle, pedal height or whatever else. This resonated with me – if the manufacturer’s products are that consistent right out of the box, and critical to your performance, why not at least consider leaning on that?

I can’t say I went the same route in my own drumming, as I’ve always customized my double pedal to be about as stiff and heavy as the hardware will tolerate. But it’s always been in my mind as an added perspective to consider, and my own personal computers are where I’ve found myself applying this idea of minimal customization in the interest of quick re-entry.

Since reading that article, enough of my system refreshes (either a new computer entirely, or formatting a system/reinstalling an OS) have been in response to disaster that I began to adopt this philosophy for much of the software I use in my daily computing. I’ll inevitably do some sort of tweaking over time, but it’ll be things like fonts, colors, shell aliases and other things that aren’t going to melt my brain if I have to do without.

It’s not like I don’t customize them at all. In some cases, it’s almost unavoidable – Sveltekit’s somewhat controversial change in routing layout comes to mind as an example where I pretty much had to hunt down the setting that makes working with multiple files more sensible.

As with most things, all users are different, and spend varying amounts of time in different apps. Someone who spends the majority of their day on the command line will certainly have much stronger opinions about colors, fonts, aliases and specific behaviors that they’re simply looking at for far longer than a software engineer who’s on the command line to fire up a local development environment, commit code, and maybe install a package that supports a new feature. But I think it’s a worthwhile perspective to keep close, especially in lines of work with deep toolboxes.

Putting the “You” in CPU

I love this, for so many reasons.

Content explaining how a CPU works is such a great candidate for this treatment: a clean, detailed, standalone site that just sort of exists to be a definitive resource that takes an intimidating concept, and makes it approachable to any other human who is simply curious enough to learn something new.

What I love even more is this sort of humble, child-like tone to the whole thing that lends itself to that accessibility. To me, it feels like a distinct human state – where the experienced grown-up part of you teams up with the insatiably-curious child you’ve been from the beginning. I think amazing work happens here. This piece was literally written/built by at least one high school student, so that checks out, but it’s a great reminder of how much better a piece can be when the reader feels like they’re learning alongside you, as opposed to being lectured.

I’ve long thought that it’d be a fun project to do a similar how-it-works piece for the whole cold boot process from beginning to idle. From raw power going into the back of the machine, to pixels finally rendering on screen.

Hello, world! After a lengthy hiatus, I'm back online.

After leaving it alone for the better part of a decade, I decided I’d missed writing enough to get serious about rebuilding my personal website.

Much of my technical background is in full-stack content management system (CMS) development, and most of the systems I’ve worked with have been LAMP stack. I’ve been fortunate to have gained a lot of great experience in this space, but I’ve been wanting to switch it up.

In my mind, end-user software development is largely about the building blocks and tools you choose to do the job. You can only do brilliant work when you love what you do, and choosing your tools is not unlike pulling together the ideal set of Lego, K’nex, Lincoln Logs, or, if you’re from an older generation, ERECTOR, to spill all over the floor when it’s time to build something.

So, always in pursuit of the best tools, I came up with a few parameters for my new website:

  • I wanted to work with a CMS that I’d yet to use
  • Ideally, said CMS would be a different language than PHP, so I can branch out of the world of LAMP and learn how to build a production server for Node apps
  • I wanted to use a Javascript component framework for the frontend application, along with goodies like PostCSS and Tailwind
  • I wanted the CMS to be a headless implementation, serving data to the frontend application over API calls
  • I wanted to use a database other than MySQL/MariaDB

So I’m talking about two applications: a frontend, and a backend. Really, each app has its own backend and frontend, but one app serves Svelte UI/precomposed pages, and the other serves up data for the Svelte to present in the more human-friendly manner you’re reading now.

If you, too, are from a PHP-ish background, this may or may not weird you out. Some of the older, more mature PHP CMS communities made varying efforts to deliver some path to headless and decoupled content delivery options, but I was never delighted by this feeling that I had to undo out-of-the-box functionality in some way to get there.

Drupal would be my de facto choice if I were targeting LAMP — IMO, it is best-in-class amongst higher-level PHP systems, and you can start building against it as a headless backend pretty quickly if you use an existing Drupal distro like Contenta. But I want to do something new (to me), here.

The content management systems popping up and thriving in the Node.js ecosystem were catching my attention. Strapi is slick, as is Keystone. Ghost was an interesting older contender that I’ve had an eye on for a while, but it quickly became one of those over-commercialized products that sort of obfuscate the path to self hosting the open source project. The idea of using Decap CMS (formerly Netlify CMS) with an embedded SQLite database is pretty cool, though I was surprised to find the admin UI was not responsive.

Ultimately, PayloadCMS got my attention, and held it. It checked all my boxes, and the more I read documentation and tried it out, the more it became clear that this was most of what I loved about building with Drupal, but it was already headless (i.e. rather than responding with entire browser-ready webpages like a traditional monolithic CMS setup will do, it simply sends your content as data for a client app to present). Payload felt like someone had boiled down the Drupal backend development experience to its purest essence.

Rather than have you build new content types via an admin user interface, Payload has the developer describe content types in Typescript configuration files. It then handles spinning up a collection in MongoDB, exposing your data over GraphQL and/or RESTful endpoints, and delivering a clean UI for content editors to add and edit content within.

There’ll be posts about this down the road, but let’s get on to that frontend.

the frontend

With Payload locked in for the backend/CMS, we’ve got instant content APIs to develop against. I need to push that data through some sort of view layer to create the end product you’re reading now.

Svelte got my attention sometime around its 2.0 release. Even then, it was so intuitive and a joy to use. I realize React was born during a very different time in Javascript’s progression, but my experience with React just never felt great, and I always thought being called ‘React’ without being reactive out of the box is a little silly (they know this, and joke about it).

As I got deeper into Svelte over time, I’d eventually learn that its creator had a similar background to mine, and had worked in digital storytelling for some big time publications (NYT). In an interview with The New Stack, Rich talks about the history leading up to Svelte, and his own pursuit of the best tools to do this job. I’d been one of those folks doing it in Flash. I was starting to feel like I was in the right room.

Sveltekit

If you’re not familiar: you can think of Sveltekit as a user interface machine. It’s the server powering the frontend. It gives me a whole bunch of great things, but here are some highlights:

  • a router for the entire user experience (e.g. /contact renders a page composed of Svelte components, including a form and some content)
  • the ability to decide between different rendering strategies — server-side rendering (SSR), client-side rendering (in your browser), or pre-rendering at build time. And you can choose which strategy is used for each route individually. That is cool.
  • a full stack JavaScript environment dedicated to churning out front end UI, which lets me easily make the most of PostCSS contrib (like Tailwind) and other modern JavaScript goodness
  • local development is powered by Vite, and it all just sort of works

Together, Payload and Sveltekit are just plain fun to work with. And I like the extra inherent layer of security/insulation you get by being able to have a server-to-server connection ready to utilize when the payload-in-transit is sensitive in nature.

Imagine a page in a government website where you can edit your user profile, dense with personal information. We can use Sveltekit’s +page.server.js file to make a request from one backend to the other. Then this data can be securely sent along to the user for whom you were fetching it, and the original data source was never exposed on the client.
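
A minimal sketch of that pattern – the internal CMS address here is hypothetical:

// src/routes/profile/+page.server.js – this load function only ever runs on the server
export async function load({ fetch }) {
  // hypothetical internal address; the CMS is never exposed to the browser
  const response = await fetch('http://localhost:3001/api/users/me')

  // only what we return here gets serialized and sent to the client
  return { profile: await response.json() }
}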

For fetching insensitive data, you can also make calls straight from the browser. Sveltekit’s +page.js file will run on the client (and also on the server, when SSR happens), so if you’re just on a /blog page, where the content you’re requesting from your backend is being presented publicly anyway, you can skip Sveltekit’s backend and just query Payload (or whatever other data sources) from the browser.
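
Again as a sketch, with a hypothetical public endpoint:

// src/routes/blog/+page.js – runs in the browser (and on the server during SSR)
export async function load({ fetch }) {
  // hypothetical public endpoint; this content is presented publicly anyway
  const response = await fetch('https://cms.example.com/api/posts?limit=10')

  return { posts: await response.json() }
}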

For this site, I’ve been challenging myself to completely insulate the CMS from direct user traffic by using +page.server.js exclusively for all GraphQL requests. At the end of the day, making a request to my homepage means you make a request to Sveltekit, which fetches data from PayloadCMS (or the cache), then sends the components and data to your browser to make up the page you see. One frontend, two backends. You’ll sometimes see Sveltekit’s peers/competitors referred to as ’backends for frontends,’ which is certainly accurate.

If you, like me, come from the world of monolithic content management systems and CRUD apps, the idea of having a dedicated UI app serving your interface can be sort of jolting up front. It certainly was for me, and I’ve seen the confusion from other software engineers when I’ve tried to evangelize this transplant of the V in MVC. It’s worth mentioning that you don’t have to deliver a Svelte UI via Sveltekit (which would mean hosting it in a Node environment).

You can feasibly bake it right into, say, a Drupal or Wordpress theme. At the end of the day, it’s just JS that needs to get to the end user’s browser, which your existing LAMP stack has always been perfectly capable of.

Compared to the CMS monolith, it’s a different way to think about content, but with the right tools, it all feels so zippy, and is pure joy to work with.

©2023 Joe Castelli