Music Editor Developer’s Log: More Cowbell

I got the soundfont to work, or at least to work well enough for my prototyping purposes. It will need some fine-tuning in the future, but if I can manage to actually turn this software into a business, it would be nice to create a custom soundfont for it anyway.

I’m now almost to the point where I can start using this software to actually write some music, but I’ve still got a number of controls and GUI elements (buttons and stuff) to program. Mainly, I need to add the ability to:

  • add and delete measures
  • edit note / track variables such as
    • release time (how long it takes an instrument to fade away after it has stopped playing)
    • volume / velocity
    • stereo position (left or right)
  • edit reverb settings
  • save and load files
  • export and load MIDI files (depending on time; this feature isn’t too important yet)
  • export MP3 or WAV files (at least look into it; if this is too time-consuming, it’ll be something to look into in the future)
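For the note / track variables above, I'm picturing something like the following sketch: a plain object holding release time, velocity, and stereo position, with each value clamped to a sensible range. All the names here (`makeNoteSettings`, etc.) are my own illustration, not anything in the editor yet.

```javascript
// Clamp a value into [min, max].
const clamp = (value, min, max) => Math.min(max, Math.max(min, value));

// Hypothetical per-note settings object with range-checked values.
function makeNoteSettings({ releaseTime = 0.2, velocity = 100, pan = 0 } = {}) {
  return {
    // Release time in seconds: how long the instrument takes to fade
    // away after it has stopped playing.
    releaseTime: clamp(releaseTime, 0, 10),
    // MIDI-style velocity, 0-127.
    velocity: Math.round(clamp(velocity, 0, 127)),
    // Stereo position: -1 = hard left, 0 = center, 1 = hard right.
    pan: clamp(pan, -1, 1),
  };
}

// Out-of-range inputs get clamped rather than rejected:
console.log(makeNoteSettings({ velocity: 200, pan: -2 }));
```

Clamping on construction (instead of validating at playback time) means the GUI sliders can be sloppy and the audio code can still trust every value it sees.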

I think that’s mostly it. And none of that stuff (save for perhaps the last one) should be too terribly difficult to code. So I think I can get it done this week!

After that, I will probably be a bit more secretive as I begin adding the “secret ingredient”: my amazing world-class AI music-generation algorithms, the secret sauce of the business. For that, I will probably have to buy a dedicated server (or VPS), as those algorithms will be executed server-side. That’ll be fun.

Hopefully I’ll also be able to use this editor to actually compose some new tracks this April. I owe my few Patreon subscribers around a dozen tracks, and I want to get that new album out, which just needs one or two more tracks. And it would just be a good test of the software, even without the AI features, to see what composing with it is like. 122 days left!

Oh, what exactly will constitute success come July 31st? I mentioned earlier that success will mean the software is either at a point where it’s ready (or close to ready) to actually market and sell, or where a working prototype is ready to show to investors. Of course, those possibilities are not mutually exclusive, but at least one must be the case. But what does the latter mean? What will make it “ready” to show?

Anything really, so I can’t lose!

Seriously, though, it will mean that the software should be able to auto-write a complete song (minus lyrics) on its own: melody, chords, orchestration. The algorithms are done; it’s just a matter of making them usable to an end user and making their output as good as possible.

I’d ideally like the software to be able to compose something with the complexity of a Mozart symphony. That would be the true peak of Parnassus. And I’m positive we’ll have that soon enough. Maybe not by July 31st, but it would certainly be awesome, no?

Music Editor Developer’s Log: Soundfont Insanity

For the past week, I’ve been trying to give my music editor the power of sound. I looked into the new Web MIDI API standards, but those are more for sending and receiving MIDI messages, not playing sound, so that’s no help. (Though it may be something to look into later for other features, of course.)

So instead I’ve been looking into the Web Audio API, which does the trick, and has mostly what I need. Actually, it has everything I need, but not everything I want. I want the sounds to sound as good as possible, which means the instrument samples must loop for sustains (as a MIDI synth would).

First I experimented with MIDI.js’s implementation of sample playing. With pre-rendered soundfonts, I could easily play samples for all the basic MIDI instruments. The problem with this implementation is that the instruments don’t loop! (Or at the very least, they don’t seem to read in the looping data saved in the soundfont.) Instruments such as strings, which can sustain indefinitely, really deserve some decent looping.
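To sketch what I mean by looping: an sf2 file stores loop points as sample-frame offsets, while the Web Audio API's `AudioBufferSourceNode` wants its `loopStart` / `loopEnd` in seconds. Assuming the loop points have already been read out of the soundfont, the glue looks roughly like this (the function names are mine, not from any library):

```javascript
// Convert sf2-style loop points (sample frames) to the seconds that
// AudioBufferSourceNode.loopStart / loopEnd expect.
function loopPointsToSeconds(loopStartFrame, loopEndFrame, sampleRate) {
  return {
    loopStart: loopStartFrame / sampleRate,
    loopEnd: loopEndFrame / sampleRate,
  };
}

// Browser-side playback would then look roughly like this (not run here;
// it needs a live AudioContext and a decoded AudioBuffer):
function playLooped(ctx, buffer, loopStartFrame, loopEndFrame) {
  const src = ctx.createBufferSource();
  src.buffer = buffer;
  const { loopStart, loopEnd } = loopPointsToSeconds(
    loopStartFrame, loopEndFrame, buffer.sampleRate);
  src.loop = true;          // keep cycling between the loop points...
  src.loopStart = loopStart;
  src.loopEnd = loopEnd;    // ...so a string patch can sustain indefinitely
  src.connect(ctx.destination);
  src.start();
  return src;
}

// e.g. a loop from frame 22050 to 44100 at 44.1 kHz:
console.log(loopPointsToSeconds(22050, 44100, 44100)); // { loopStart: 0.5, loopEnd: 1 }
```

This is the behavior I'd expect a soundfont player to give me for free; as far as I can tell, MIDI.js's pre-rendered samples never set these loop properties at all.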

So I moved on to experimenting with a library called sf2synth.js. I can’t understand the Japanese comments (the developer seems to be from Tokyo), but this implementation seems to load in soundfont files much more completely, and actually reads in and uses the looping data! Woohoo!

But even it has a problem. When I play a note from the Musyng Kite soundfont (which is the soundfont I’m currently using for experimental purposes) in the Polyphone Soundfont Editor (which is a great piece of software), it sounds great. But when it’s played back in the browser through sf2synth.js, it sounds much blander.

Here is what I think is happening…

If we look at a preset in Polyphone, we can see that it’s actually made up of multiple instruments; below you can see that “Strings Ensemble” is actually made up of 8 layers.

To me, it sounds like sf2synth.js is only playing one of these layers, instead of all of them, as a true soundfont player should.
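My hunch, sketched out: a preset maps one (key, velocity) pair to potentially many instrument zones, and a correct player triggers every zone whose ranges match, not just the first. The zone shape below is my own simplification of what an sf2 preset zone carries, purely for illustration.

```javascript
// Return ALL zones whose key and velocity ranges contain the note.
// A buggy player is effectively doing .find() here instead of .filter().
function matchingZones(zones, key, velocity) {
  return zones.filter(z =>
    key >= z.keyLo && key <= z.keyHi &&
    velocity >= z.velLo && velocity <= z.velHi);
}

// Made-up stand-in for a layered preset like "Strings Ensemble":
const stringsEnsemble = [
  { name: 'violins', keyLo: 0,  keyHi: 127, velLo: 0, velHi: 127 },
  { name: 'violas',  keyLo: 0,  keyHi: 127, velLo: 0, velHi: 127 },
  { name: 'celli',   keyLo: 36, keyHi: 96,  velLo: 0, velHi: 127 },
];

// Middle C (key 60) falls inside all three ranges, so a correct player
// should layer all three samples at once:
console.log(matchingZones(stringsEnsemble, 60, 100).length); // 3
```

If sf2synth.js stops at the first matching zone, that would explain exactly the "bland" single-layer sound I'm hearing.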

So my mission for this week is to dig into the sf2synth.js code, try to understand how it’s loading and playing sounds from the soundfont file, and try to give it the ability to play all the layers in a preset, as it should. Polyphone is open-source, so I can also dig around in its code to see how it loads and parses / interprets sf2 files.

I probably only want to spend two weeks max on this; if I can’t figure it out after two, I’ll just have to settle for suboptimal sounds and move on. I can always come back to soundfont programming later. It’s more important to get a working prototype finished by the end of July. 129 days left!

Work on web MIDI editor continues…

Progress on my web-based MIDI editor / animator has been slower than I’d like, but isn’t that always the case? At the moment, I’ve got the basics I want: you can add and delete notes, copy and paste, create and delete tracks, hide and show tracks, and edit track colors. I still need to allow you to add and delete measures, though.

But what I want to work on next is the sound; my editor doesn’t actually play any sound yet. I may try to use this JavaScript soundfont player: https://logue.dev/smfplayer.js/ … Of course, soundfonts don’t sound nearly as good as sample libraries, but until someone programs a JavaScript-based VST host that a browser can use, I’ll have to settle for what I can find. Users should be able to import and export MIDI files anyway.

136 days left until July 31st!