My first AI music album: “The Shadow Age”

I’ve been enjoying writing songs with the AI songwriting tool Suno for the past few months, and recently put together a full-length album of some of my favorite tracks so far. While the AI wrote the music and provided the performances, I wrote the lyrics, which are very deep and profound. (Though two of the tracks are from old famous poems.) The symphonic metal album is free to download here (ZIP file, MP3 V0, 111.2MB) or on Bandcamp.

Don’t like AI music? Well, I’m sorry, but I’m going to create even more AI albums, bwahaha!

First impressions with the Meta Quest 3 VR headset

As I’ve blogged about before, I’ve had trouble with my programming productivity lately, a major cause being my terrible sitting posture while using my desktop due to the monitors not being situated quite how I’d like, and my chair not optimally supporting my spine. I get a sharp stabbing pain in the back of my neck and between my shoulders after about an hour or so.

I thought about getting a Steam Deck to allow me to play games away from my computer, but, after seeing a few YouTube videos and Twitter posts from people finding comfort while programming in VR, thought that the Meta Quest 3, which was released near the end of last year, might be just what I needed!

So I just got one and am happily writing this post from the comfort of my bed with a wireless keyboard and some giant VR monitors hovering just in front of me.

Overall, I’m loving it, just the sort of thing I was hoping for. Here are some pros and cons I’ve found with the Meta Quest 3 during my first couple days of use.


The resolution and frame rate are great, much better than the original Oculus Rift I got 8 years ago (2016). That was fun for a bit of gaming, but the resolution was far too low for any sort of virtual desktop work, and the VR sickness was pretty intense.

With the resolution doubled since then, and improvements made to the lenses (though the field of view does not seem quite as wide now), virtual desktops are now usable. These improvements also seem to help with VR sickness: I have explored a few virtual worlds and have experienced no VR sickness whatsoever!

Another pro is that it does not need to be connected to anything. It’s a standalone unit. It also doesn’t need an external camera for positional tracking (as the original Oculus Rift and the PSVR do), and the tracking is pretty much perfect. I can even connect it to my computer for a virtual desktop all through Wi-Fi. This is a great convenience.

The “passthrough” is excellent. The unit has cameras on the front, allowing me to basically see through the headset (albeit at a lower resolution), so I can see my hands, my keyboard, my cat, etc. I can even walk around the house with no problem!

I have been especially impressed with VR videos on YouTube, of which I’d love to see a lot more. Not the flat 360-degree videos which just put you in a big flat sphere, but the 180-degree 3D ones, that make it look like people and places are just in front of you. In fact, I’d really love to see an entire movie or play in VR. I would definitely love to even get a VR camera and shoot some stuff at some point.


The major problem with the Quest 3 is that it is very uncomfortable for me. It comes with simple straps that sandwich your face, the main unit pressing against your eyes and cheeks. It’s made worse for me by my need for glasses. I can wear them inside the headset, and while that sharpens my view of the VR world, they’re just something else pressing into my face. It’s extremely annoying.

Hopefully this problem can be helped with some accessories, which I’ve purchased but which won’t be delivered for a few weeks. First, I’ve ordered some custom lenses so I’ll be able to see clearly in the VR without having to keep my glasses on. I’ve also ordered a halo strap which should, like the PSVR (which is by far the most comfortable VR headset I’ve yet tried), take the pressure off my face by transferring the weight of the unit to my head instead.

Another con is that, like the Oculus Rift, it gets a bit warm, which is annoying when it’s pressed against your face. Hopefully a halo strap will also help with that.

The unit has a short battery life, around 2 hours, which I’m sure will only get worse over time. I’ve only had my unit for a couple of days, and I’ve already drained the battery three times. I guess I could just keep it plugged in? But that’s a bit of a nuisance. The halo strap I ordered comes with a battery pack, so that should definitely help.

Another con is that the resolution could be even sharper; although it’s now good enough to use virtual monitors, text is still somewhat fuzzy, and there is still some aliasing and shimmering going on. Hopefully in another decade we’ll have even higher-resolution VR headsets? I still don’t think I’d watch a movie in here; even though I can experience a giant virtual theater, I enjoy the higher resolution of the real world for movies and TV. (Also, the Netflix app for this thing is terrible; it streams at too low a resolution with too much compression.)

One last con is that the unit is kind of… smelly. It doesn’t have that new plastic computer smell, which is the stuff dreams are made of. Instead it just smells kinda weird, almost like body odor. It’s admittedly slight, but it’s annoying. Hopefully it’ll go away eventually, but until then I guess I can always light scented candles or some dragon’s blood incense.

(Now I have to write the rest of this post outside of VR, because I drained the battery again.)

The Metaverse

I’m still not at all sold on the whole “Metaverse” concept. Perhaps I’m too much of an introvert, but I don’t see the appeal of exploring a virtual environment with a bunch of strangers’ avatars wandering around in front of me with random chatter from random voices all over. If they were people I knew outside of VR, it could be a fun and interesting experience, but I just don’t want to explore VR worlds with strangers. Sorry strangers. Sorry Mark.

Desktop Use

Right now, I’m using the “Immersed” app, which lets you cast your computer monitors into VR and add additional virtual monitors. For programming, it’s very useful to have at least two: one for the code, another for the running results. It would also be useful to have even more screens to pull up documentation and other resources without having to shrink and hide windows.

Right now I’m just using the free version of the app. I’ll probably try the paid version when my accessories eventually arrive to see if it’s worth the upgrade, but the free version is probably all I need.

Since the visual info is streamed over Wi-Fi, there’s no need for cords, but it does drop frames every now and then, so it’s probably not great for watching videos from the desktop or playing PC games. For that, you’d probably need to physically connect your computer to the headset, which I have not yet tried.

Overall, the Meta Quest 3 gets a big thumbs up for me, despite its cons, which I hope the accessories will help with.

Solar Eclipse!

My parents and I travelled up to Erie, Pennsylvania this weekend to see today’s total solar eclipse. (We missed the 2017 one.) It was awesome! It was a very cloudy day, but fortunately the clouds thinned out enough that when the eclipse reached totality, we could see the “diamond ring” in its full glory. Very interesting to see the faint shades of color around the edges of the moon. The rapidity with which the whole sky becomes dark and light again before and after totality was also awesome to see. I generally hate traveling, but this was worth the trip.

(It was also nice that our hotel gave us a free upgrade from a normal dinky little hotel room to a double bedroom; more spacious, and I got my own room!)

I didn’t spend much time trying to get a good picture as I preferred to just focus on the experience. But here’s the partial before totality, taken through the filter of the eclipse glasses:

And then here’s a terrible picture of the total eclipse as shot through my phone with default settings, blurry and crappy:

There’s really not much else to do around here in Erie, PA. We went to see the shore of Lake Erie yesterday, and tonight I want to try seeing the new Godzilla x Kong movie at a nearby theater in 3D with D-BOX haptic motion seats, which I’ve never tried; we don’t have any back home.

Prayer to St. Michael with Suno AI

I turned the Prayer to Saint Michael into some epic choir music with Suno AI:

It would have been a lot easier for me to learn my prayers as a kid if it had been so easy to turn them into music.

I actually wanted the whole prayer to be sung by the entire choir, but Suno AI seemed to insist on featuring a solo vocalist for the second part (“May God rebuke him…”), as you can hear above. I also had to try quite a few times to get it to pronounce “wickedness” clearly and correctly; it kept wanting to sing “winess” or “wicks”. But I like how it ended up.

Here are some other versions it came up with, though I didn’t quite like any of them as much as the above.

V3 with the little “….amen!” at the end sounds almost comical.

Anyway, I’ve been thinking about posting some lyric videos of my Suno creations to YouTube. I made the St. Michael video above with Shotcut, but that seems impractical for a video with changing lyrics. Perhaps if I can make a template in Blender, I can use that. But I haven’t played around with Blender in a long time, and I don’t want to spend too much time on it… something to play around with later this month.

For now, it’s almost time for the 2024 eclipse! Though the weather might not be so good… we’ll see…

Fun with Suno: AI Song Generator

Wow, this is my first blog post of the year. That’s pretty sad.

This week I’ve been playing around with Suno, an AI song generator. As far as music-generating AI goes, it’s definitely the best I’ve seen so far, as it actually generates melodies, which is what most musical AIs stink at.

Of course, it’s got its weaknesses, but this is new tech, so that’s to be expected. And I haven’t seen competition that really does anything similar yet, though I’m sure that will come.

Anyway, here are some of the songs I’ve generated with the app. You can have it generate its own generic lyrics, but I find it more interesting to provide my own.

The first three are symphonic metal, one of my favorite genres. Maximus is an epic choir singing in another language. A Song Unsung and The Road Inside are some relaxing indie folk. The Owl and the Dragon is a folk-ish lullaby. A boys’ choir sings The Crystal Knife. About the Cats is in the style of a generic ’90s pop song. Finally, Boop! is an Irish folk song with nonsense lyrics. Links to the lyrics for each song can be found at the bottom of this post.


Perhaps the biggest weakness is lack of control. Other than providing the lyrics and style, you don’t really have much control over the details, which you’d likely want if you were a serious composer or songwriter.

Styles are also limited; I asked it for the style of a Russian folk song (“The Owl and the Dragon”), and it just gave the singer a Russian accent.

The format is limited. For best results, it seems good to stick to four-line verses and choruses, from which it generates standard, generic 8-bar melodies.

Its text-to-song conversion isn’t perfect. Sometimes it repeats a syllable, ignores a syllable, or puts emphasis on a weird syllable. Sometimes it will sing a line from a verse as though it’s part of the chorus; its “parsing” makes mistakes.

Sound quality is another weakness. You can probably tell from the examples that it outputs some pretty low-quality audio, especially with the bombastic symphonic metal, which can sometimes make the lyrics hard to understand. But raw audio contains even more data than images, and image generators themselves still output a lot of noise. With images, however, it’s easy to discount the noise as texture or something. With musical sound, noise gets in the way; we’re used to hearing nice clean sounds in professional recordings (especially if you’re an audiophile), where even the hissing high frequencies of cymbals matter to a degree.
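To put some rough numbers on that data gap, here’s a back-of-the-envelope comparison using standard formats (CD-quality audio and a typical generated-image size; nothing specific to Suno):

```python
# CD-quality stereo audio: 44,100 samples per second, 2 channels, 16 bits (2 bytes) each.
audio_bytes_per_second = 44_100 * 2 * 2      # 176,400 bytes/s

# A typical 512x512 RGB image from an image generator: 3 bytes per pixel.
image_bytes = 512 * 512 * 3                  # 786,432 bytes

# One such image holds only a few seconds' worth of raw audio...
print(round(image_bytes / audio_bytes_per_second, 1))     # 4.5

# ...so a 3-minute song carries as much raw data as about 40 images.
print(round(180 * audio_bytes_per_second / image_bytes))  # 40
```

So even before considering how picky our ears are, a song is a much bigger raw target to generate cleanly than a single image.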

In some output (not the ones I’ve showcased here), I could swear I could hear overtone artifacts of other words or singers faintly in the background; I’m guessing the AI is doing diffusion with frequencies / Fourier transforms, and generating little fragments of training data it should be ignoring. Or it could just be weird auditory illusions.
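As a toy illustration of that frequency-domain view (purely my speculation; Suno hasn’t disclosed how it works), a Fourier transform turns a chunk of audio samples into per-frequency magnitudes, the kind of spectrogram representation audio generators commonly operate on. All the numbers below are arbitrary toy values:

```python
import cmath
import math

sample_rate = 8_000   # samples per second (toy value)
n = 512               # analysis window length

# One window of a pure 500 Hz sine tone.
audio = [math.sin(2 * math.pi * 500 * i / sample_rate) for i in range(n)]

# Naive discrete Fourier transform: magnitude at each frequency bin.
def dft_magnitudes(samples):
    N = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / N)
                    for t in range(N)))
            for k in range(N // 2)]

mags = dft_magnitudes(audio)
peak_bin = max(range(len(mags)), key=mags.__getitem__)
print(round(peak_bin * sample_rate / n))  # 500 -- the tone shows up as one bright bin
```

A model generating in this domain would be denoising a grid of such magnitudes over time rather than raw samples, which is one way stray fragments of overtones could plausibly bleed through as the artifacts described above.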

Is it useful?

Given all these weaknesses, is Suno a useful tool? Honestly, it’s probably not super useful for professional musicians yet, perhaps other than a quick and easy way to get some ideas. Otherwise, it’s perhaps still more of a toy at its current stage.

Granted, such a musical toy can still be a lot of fun, and I’m excited to see the app develop further. I’m not sure who’s behind it or even what country it’s from, but I do hope they don’t get bought out too easily.


What about my own music AI, the development of which I’ve been procrastinating on? Has Suno beat me to the punch?

My approach is a lot different as I’m not really dealing with the sound of music. My focus with TuneSage is more about the actual notes and musical structures of a piece.


Here are links to each song on Suno, where you can see my profoundly beautiful lyrics:

Close Your Eyes
A True Heart
The Shadow Age
A Song Unsung
The Road Inside
The Owl and the Dragon
The Crystal Knife
About the Cats