To my eternal shame, it’s been some months since I made any decent progress on TuneSage. But I’ve been back at it in the last few weeks, trying to tackle the time-consuming problems I’ve been having. Clearly my initial plans were not practical. Here are my current plans:

The AI

I’m vastly simplifying the “AI” element. In fact, I might even stop using “AI” to describe the app altogether. It’s become an overused marketing buzzword in the last couple years anyway. Users will still be able to generate melodies automatically, of course. But the backend will be a lot less complicated.

So I’m rethinking the whole concept of musical styles. My initial plan was simple enough: feed musical examples into a neural network, have it identify styles, and then use it to help write new music in those styles, pairing it with the melody-generating algorithm I already have. But that’s just not working very well, and I’ve spent way too much time fooling around with that approach.

But what exactly is musical style anyway? For melodies, at least, we can probably get similar results by simply identifying and using melodic tropes, or signatures, and avoiding melodic rarities for a particular style. Such tropes are simple enough that they can be identified and implemented without training anything. Instead, we can just say, “hey, melody generator, make this trope more likely to occur in what you generate.” Done. Easy.
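
Just to make that concrete, here’s a rough Python sketch of what I mean. The trope table, the weights, and the function names are all invented for illustration; this isn’t TuneSage’s actual generator, just the “nudge the probabilities” idea:

```python
import random

# Illustrative trope table: melodic intervals (in semitones) that are
# "signature" moves for a style get a weight boost; rarities get a penalty.
# The numbers are made up purely for this example.
TROPE_WEIGHTS = {
    2: 3.0,    # ascending major second: very common, so boost it
    -2: 3.0,   # descending major second
    -1: 2.0,   # descending minor second
    7: 1.5,    # ascending perfect fifth
    6: 0.1,    # tritone: rare in this hypothetical style, so suppress it
}

def choose_interval(candidates, base_weight=1.0):
    """Pick the next melodic interval, biased toward the style's tropes."""
    weights = [TROPE_WEIGHTS.get(iv, base_weight) for iv in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

def generate_melody(start=60, length=8):
    """Generate a toy melody as MIDI note numbers, starting from middle C."""
    candidates = [-7, -5, -2, -1, 1, 2, 5, 6, 7]
    melody = [start]
    for _ in range(length - 1):
        melody.append(melody[-1] + choose_interval(candidates))
    return melody

print(generate_melody())
```

No training, no neural network: the “style” is just a table of weights the generator consults.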

Anyway, for the sake of just getting this darn app launched and getting a minimum viable product out there, I think I’m going to ignore styles altogether for now.

The front-end

I’ve been having difficulty figuring out just what the front-end should look like and how it should work.

Firstly, the app will focus, at least for now, only on writing and generating melodies. It won’t be for composing and mixing entire pieces, unless they’re extremely simple. And because the focus is on writing tunes, the traditional piano-roll and track views, both of which I’ve spent some time putting together, just feel too clunky for editing melodies. The whole point of TuneSage is to change the paradigm of composing music, at least melody-wise, so it needs a view / layout designed for that purpose.

So I think I’ve finally come up with something that might work, which I’ll reveal when I get closer to launching (or on Twitch if / when I stream my programming again).

The current to-do list

  • Front-end
    • Buttons for: create new melody, generate melody, delete melody, move melody
    • Set tempo option
    • Allow user to “lock” notes & chords so that only the unlocked parts of a melody are regenerated (see the sketch after this list)
    • Chordal accompaniment templates (mostly already done)
    • Chord chooser options (mostly already done)
    • Export MIDI / Save / Load options
    • Melody options
      • Time signature (probably only 2/4, 4/4, 3/4, 6/8 to start)
      • Key signature
      • Instruments for melody and chordal accompaniment
      • Volume
    • Play functionality (play, pause, stop)
    • Demo settings (not sure what the limits should be yet… perhaps limited time, no MIDI export, or only a certain number of melodies? I also need to find a way to discourage bots.)
  • Back-end
    • Melody generation code (mostly already done)
  • Overall app stuff
    • User login system
    • Terms of service page
    • Subscription service (Stripe?)
    • Create landing page
    • Actually incorporate as a company
    • LAUNCH
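
One note on the “lock” item in the front-end list above: here’s a rough Python sketch of how partial regeneration could work, assuming a melody is just a list of MIDI note numbers with a parallel list of booleans for the lock state. The function name and the interval choices are, again, invented for illustration:

```python
import random

def regenerate_unlocked(melody, locked, candidates=(-5, -2, -1, 1, 2, 5)):
    """Return a copy of the melody with only the unlocked notes regenerated.

    melody -- list of MIDI note numbers
    locked -- parallel list of booleans; True means the user locked that note
    """
    result = list(melody)
    for i in range(len(result)):
        if locked[i]:
            continue  # locked notes stay exactly as the user wrote them
        # Re-derive an unlocked note from its left neighbour (or from its
        # original pitch if it's the first note) by a random interval.
        anchor = result[i - 1] if i > 0 else melody[i]
        result[i] = anchor + random.choice(candidates)
    return result

melody = [60, 62, 64, 65, 67, 65, 64, 62]
locked = [True, False, False, True, True, False, False, True]
print(regenerate_unlocked(melody, locked))
```

In the real app the regeneration step would call the proper melody generator (key, tropes, and all) rather than picking a random interval, but the locking mechanism itself would be about this simple.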

I think that’s it. Lots of stuff, but it should all be doable, especially since I’m going to stop fooling around so much with the backend AI stuff for now.
