It’s just a temporary page. One can sign up for the mailing list, but until I start actually releasing sample work or something, it’s probably not good for much. Better than nothing, though.
I have no idea what the look and feel of the final site will be; I am not a graphic designer. However, I really like the font I found for the title. I was originally trying to reuse the letters I created for Insane Fantasy…
Very similar! An uncanny resemblance, especially the ‘E’ and the ‘S’. I kind of prefer my curved ‘A’. I tried to make a ‘G’ to match the style, but I failed terribly; it looked horrible. So I’m glad I was able to find something so similar.
Anyway, be sure to head over to TuneSage and join the mailing list! (If you want.)
Startup School 2019 has begun! Today we basically just got the orientation video (which is not private, so I can embed it):
They tend to make all the lectures public on YouTube, so I should be able to embed them all as they are released! And there’s still time to register, as they mention in the video. Looks like the meetup for the DC area is on August 14th! (Or the 24th? Website and video don’t agree.) I hope to make it there.
They mention that the weekly updates should include some measurable metric. Since I haven’t launched yet, I suppose my metric will be “weeks until launch”, which I am nervous to estimate because things always take longer than you think they will. However, here is my initial estimate:
To do list:
Set up home page to tease potential users, collect emails (today)
Get algorithms and GUI to a usable state (4 weeks)
Create user log-in system paired with payment system & user forum / guides (2 weeks)
Incorporate the company (1 day) and launch!
So six weeks until launch! Good luck to me. (No promises, obviously.)
One of my initial concerns is: should I limit growth to make sure the service can scale? I guess it’s too early to worry about that though.
This year, Y Combinator’s Startup School is open for everyone to register, and I’m hoping to participate. As they say on their blog:
Today, we’re opening up registration for Startup School 2019, our free online course for founders looking to get help turning an idea into a startup. The 10 week course will begin July 22, 2019 and is free for everyone to participate.
They’ll also be granting equity-free $15K grants to “the most promising companies that join and complete the course.” (I still hope to apply to the core YC program, but the possibility of a $15K grant if I don’t make it would surely be nice.)
They’ll also be hosting meetups / events around the world, one location being Washington DC, which I’ll try to make it to. (I just hope it’s not on Tuesday, September 10th, as I’m going to a Kamelot concert that day. Or near the end of August, as I’ve got a sibling’s wedding to go to.)
My startup is the AI-powered music generation web app I’ve been working on, now tentatively titled Tunesage. (Can you think of a better name?)
I was hoping to finish a prototype of the web app by the end of this month (July 2019). I’ll still try to, but I’m also giving myself an extension until September 25th (the deadline to apply to the Y Combinator Winter 2020 batch) due to circumstances beyond my control (such as a sibling’s approaching wedding and my parents deciding now is a good time to redecorate parts of the house).
So that’s what I’m up to. I’ve also been learning the programming language Rust as I hope to use that on the music app’s back-end.
As the 2020 election approaches, we’ll probably hear more about the idea of “universal basic income” from politicians. And it can sound tempting for two main reasons. Reason 1: Free money! Yay! Reason 2: Technological innovations will put people out of their jobs, whatever will we do?! (Answer: Free money! Yay!) (And perhaps Reason 3: I can show compassion towards the less fortunate without having to do anything but vote! Wow, that feels good!)
But it won’t work.
My viewpoint is this: What is money? What does it mean, what does it represent? Ultimately it represents a person’s labor1 and another person’s valuation of that labor. (A product you buy or don’t buy is the product of people’s labor. Even if it was made in a factory. Even if that labor was in the past. That’s really what you’re paying for.) Its value is not arbitrary, but it is completely psychological, and collectively psychological at that. It is determined by the countless economic exchanges people make every day. What is a dollar worth? Whatever the holder of that dollar is willing to exchange it for, and whatever someone else is willing to trade to get it.
In other words: THE VALUE OF MONEY IS DEPENDENT ON ITS DISTRIBUTION. Its value cannot be dictated by some authority other than the countless economic exchange decisions people make, because the worth of a man’s labor cannot be dictated by some authority. You can’t just redistribute it with no associated exchange of labor (abstract as that may be) and expect it to retain its value.
This is the biggest and most dangerous flaw of logic so many people seem to make, thinking that money could forcibly (that is, through governmental force rather than organic economic incentive) be exchanged and retain its value. Why / how would it retain its value?!
So when money is exchanged without any associated exchange of labor, as would be the case with universal basic income, you break the game. You devalue money. It logically doesn’t work because the money no longer represents an exchange of labor (or anything at all for that matter). This means the money won’t be spent as though it is. This means the “worth” of whatever the person buys with their “free money” is warped for everyone. Ultimately you just get a rampant cycle of inflation along with the devaluation of needed labor.
This is also why a minimum wage set “by force” (by law) doesn’t work2, at least not long term: wages are then not economically organic, and you actively incentivize businesses to innovate and replace the now-costly employees, or else go out of business. The idea that the wealthy CEOs at the top will just shrug, swallow the loss, and devalue their own work is ludicrous. The idea that shareholders of profitable companies will just snap their fingers, say “ah, shucky darns!”, and devalue their own investments is ludicrous.
Also note that this has nothing to do with tax (“we can tax production instead of income!”) or issues of “so where does all this free money come from?!”3 It doesn’t matter. It’s the act itself that’s the problem, the act of giving people money for nothing. The exchange is meaningless and so the money is meaningless, and so every economic exchange that ripples from the spending of that free money is devalued.
Granted, it’s difficult (if not impossible) to measure this devaluation, as it’s purely psychological4. But that shouldn’t be controversial, because the value of money itself is purely psychological to begin with.
I also thought the video below was an interesting perspective. Jordan Peterson comes at it from a more personal psychological point of view. He says that the idea of “universal basic income” tries to rectify the wrong problem. The problem is not that people lack money, he says, but that they lack purpose. A person without concrete purpose will waste their money, essentially, so it doesn’t solve their problem. “Provision of money without purpose is not helpful.” Money without meaning will do more to hurt an individual than help. “You don’t want no responsibility,” he says.
Makes sense. And so I think he sees the other side of the same coin. Money is psychological. Unearned money is not spent like earned money. This creates both personal and economic problems.
Of course, economic problems already exist. Social security, welfare, government bail-outs, spending waste, national debt, forced insurance (healthcare!). They all devalue money (or labor) in one sense or another. But the system doesn’t bear these “cheats” because they somehow actually work; the system works despite them. It’s like saying: well, the camel is still standing, what’s another little piece of straw? Aside from already not moving as fast as he could, the camel is doomed to collapse if you keep adding weight to his back; that he hasn’t collapsed yet is not evidence that he never will, especially when history is full of the graves of crushed camels (that is, socialist nations). And universal basic income would not be another little piece of straw; it would be a boulder.
Musical artist Radical Face, one of my favorites, recently released a new EP: Therapy. It’s great stuff, catchy melodies, memorable lyrics. While listening to the third track, “Personal Giants”, a simple four-note phrase that appears at the end of the main melody caught my ear. You can hear it first appear at about 12 seconds in:
Just those four notes there. “And kept the light…” And again at 30 seconds in. “You told me time…” Sounds like a simple ascending major triad, with a minor chord on the second beat. Something like this:
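For what it’s worth, here’s the kind of figure I mean sketched in MIDI note numbers. These pitches are my own illustration of the contour (an ascending C major triad in the melody, with an E minor chord under the second beat), not a transcription of either piece:

```javascript
// A rough sketch of the four-note figure, in MIDI note numbers
// (middle C = 60). Illustrative pitches only, not a transcription.
const melody = [60, 64, 67, 72];  // C4, E4, G4, C5 — ascending major triad
const beat1Chord = [48, 52, 55];  // C major (C3, E3, G3)
const beat2Chord = [52, 55, 59];  // E minor (E3, G3, B3) — the iii chord

// Intervals between successive melody notes, in semitones:
const intervals = melody.slice(1).map((n, i) => n - melody[i]);
// major third, minor third, perfect fourth
```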
This simple phrase stuck out to me because it reminded me of one of my favorite film scores, James Horner’s score for The Land Before Time. The “Great Valley” theme begins with a similar phrase, an ascending major triad with a minor chord (iii?) on the second beat. You can hear it enter at 3:07 in this track:
Other than those four notes, the melodies are quite different. But to me they’re memorable enough that hearing them in Radical Face’s song immediately conjured up images of Little Foot and rocks and a great valley and dead cloud dino Mama beckoning… And the lyrics in “Personal Giants” perhaps could apply to Little Foot. “To me you’re a giant, some distant lighthouse” … maybe a stretch, but it could work, yep yep yep.
So then just the other day Radical Face does a livestream Q&A, and what does he say at 37:07? Behold…
“Ooh, I love movie soundtracks. Some top ones would be, I really specifically adore The Land Before Time soundtrack by James Horner. I think it’s so good.”
Aha!! You see?! Clear and undeniable evidence of musical influence here! And only I understood, only I could see the secret of those four notes, only I made the connection, haha!
By the way, one of my pieces also features some clear and undeniable influence from The Land Before Time soundtrack, if you can find it…
I got the soundfont to work, or at least to work well enough for my prototype-creating purposes. It will need some fine-tuning in the future, but if I can manage to actually turn this software into a business, it would be nice to create a custom soundfont for it anyway.
I’m now almost to the point where I can start using this software to actually write some music, but I’ve still got a number of controls and GUI elements (buttons and stuff) to program. Mainly, I need to add the ability to:
add and delete measures
edit note / track variables such as
release time (how long it takes an instrument to fade away after it has stopped playing)
volume / velocity
stereo position (left or right)
edit reverb settings
save and load files
export and load MIDI files (depending on time; this feature isn’t too important yet)
export MP3 or WAV files (at least look into it; if this is too time-consuming, it’ll be something to look into in the future)
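As a rough sketch of what “add and delete measures” involves in the editor’s data model (all names here are my own placeholders, not the editor’s real internals):

```javascript
// Minimal sketch of a track whose notes carry a measure index.
// Placeholder names; not the editor's actual code.
function makeTrack() {
  return { measures: 4, notes: [] }; // note: { measure, pitch, start, duration }
}

function addMeasure(track, atIndex) {
  track.measures += 1;
  // Shift any notes at or after the insertion point one measure right.
  for (const note of track.notes) {
    if (note.measure >= atIndex) note.measure += 1;
  }
}

function deleteMeasure(track, atIndex) {
  track.measures -= 1;
  // Drop notes in the deleted measure; shift later notes left.
  track.notes = track.notes.filter(n => n.measure !== atIndex);
  for (const note of track.notes) {
    if (note.measure > atIndex) note.measure -= 1;
  }
}
```

The main wrinkle isn’t the measure count itself but keeping the notes consistent: shifting the ones after the edit point and dropping the ones inside a deleted measure.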
I think that’s mostly it. And none of that stuff (save for perhaps the last one) should be too terribly difficult to code. So I think I can get it done this week!
After that, I will probably be a bit more secretive as I begin adding the “secret ingredients” which are my amazing world-class AI music generating algorithms, which will be the secret sauce of the business. For that, I will probably have to buy a dedicated server (or VPS), as those algorithms will be executed server-side. That’ll be fun.
Hopefully I’ll also be able to use this editor to actually compose some new tracks this April. I owe my few Patreon subscribers probably around a dozen or so tracks, and I want to get that new album out, which just needs one or two more tracks. And it would just be a good test of the software, even without the AI features, to see what composing with it is like. 122 days left!
Oh, what exactly will constitute success come July 31st? I mentioned earlier that success will mean the software is either at a point where it’s ready (or close to ready) to actually market and sell, or at a point where a working prototype is ready to show to investors. Of course, those possibilities are not mutually exclusive, but at least one must be the case. But what does the latter mean? What will make it “ready” to show?
Anything really, so I can’t lose!
Seriously, though, it will mean that the software should be able to auto-write a complete song (minus lyrics) on its own: melody, chords, orchestration. The algorithms are done; it’s just a matter of making them usable to an end user and making their output as good as possible.
I’d ideally like the software to be able to compose something with the complexity of a Mozart symphony. That would be the true peak of Parnassus. And I’m positive we’ll have that soon enough. Maybe not by July 31st, but it would certainly be awesome, no?
For the past week, I’ve been trying to give my music editor1 the power of sound. I looked into the new Web MIDI API standards, but those are more for sending and receiving MIDI messages, not playing sound, so that’s no help. (Though it may be something to look into later for other features, of course.)
So instead I’ve been looking into the Web Audio API, which does the trick, and has mostly what I need. Actually, it has everything I need, but not everything I want. I want the sounds to sound as good as possible, which means the instrument samples must loop for sustains (as a MIDI synth would).
First I experimented with MIDI.js‘s implementation of sample playing. With pre-rendered soundfonts, I could easily play samples for all the basic MIDI instruments. The problem with this implementation is that the instruments don’t loop! (Or at the very least, it doesn’t seem to read the looping data saved in the soundfont.) Instruments such as strings, which can sustain indefinitely, really deserve some decent looping.2
So I moved on to experimenting with a library called sf2synth.js. I can’t understand the Japanese comments (the developer seems to be from Tokyo), but this implementation seems to load in soundfont files much more completely, and actually reads in and uses the looping data! Woohoo!
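For the curious, applying that loop data through the Web Audio API looks roughly like this. AudioBufferSourceNode expects loopStart/loopEnd in seconds, while soundfonts store loop points as sample-frame indices, so a conversion is needed (the helper names and frame numbers below are mine, for illustration):

```javascript
// Soundfonts store loop points as sample-frame indices; the Web
// Audio API's AudioBufferSourceNode takes loopStart/loopEnd in
// seconds. This helper converts between the two.
function loopSeconds(startFrame, endFrame, sampleRate) {
  return {
    loopStart: startFrame / sampleRate,
    loopEnd: endFrame / sampleRate,
  };
}

// How it would be wired up in the browser (only runnable there):
function playLooped(audioCtx, buffer, startFrame, endFrame) {
  const src = audioCtx.createBufferSource();
  src.buffer = buffer;
  const { loopStart, loopEnd } = loopSeconds(startFrame, endFrame, buffer.sampleRate);
  src.loop = true;        // keep cycling between the loop points
  src.loopStart = loopStart;
  src.loopEnd = loopEnd;
  src.connect(audioCtx.destination);
  src.start();
  return src;
}
```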
But even it has a problem. When I play a note from the Musyng Kite soundfont (which is the soundfont I’m currently using for experimental purposes) in the Polyphone Soundfont Editor (which is a great piece of software), it sounds great. But when it’s played back in the browser through sf2synth.js, it sounds more bland.
Here is what I think is happening…
If we look at a preset in Polyphone, we can see that it’s actually made up of multiple instruments; below you can see that “Strings Ensemble” comprises 8 layers.
To me, it sounds like sf2synth.js is only playing one of these layers, instead of all of them like a true soundfont player should.
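If that’s the case, the fix is conceptually simple: trigger every matching layer and sum the results. A toy mixdown, with plain arrays standing in for audio buffers:

```javascript
// Toy mixdown: each layer is an array of samples plus a gain.
// A real soundfont player would also filter layers by key and
// velocity range before mixing; this only shows the summing.
function mixLayers(layers, length) {
  const out = new Float32Array(length);
  for (const { samples, gain } of layers) {
    for (let i = 0; i < length && i < samples.length; i++) {
      out[i] += samples[i] * gain;
    }
  }
  return out;
}
```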
So my mission for this week is to dig into the sf2synth.js code, try to understand how it’s loading and playing sounds from the soundfont file, and try to give it the ability to play all the layers in a preset that it should. Polyphone is open-source, so I can also dig around their code to see how they’re loading in and parsing / interpreting sf2 files.
I probably only want to spend two weeks max on this; if I can’t figure it out after two, I’ll just have to settle for suboptimal sounds and move on. I can always come back to soundfont programming later. It’s more important to get a working prototype finished by the end of July. 129 days left!
Progress on my web-based MIDI editor / animator has been slower than I’d like, but isn’t that always the case? At the moment, I’ve got the basics I want; you can add and delete notes, copy and paste, create and delete tracks, hide and show tracks, and edit track colors. Still need to allow you to add and delete measures though.
I’m writing this blog post on my phone with a bluetooth keyboard and the WordPress app for Android that I’ve never tried before. So far, pretty good. My LG G6 seems more responsive than the old iPad I tried using before. Unfortunately the screen is significantly smaller on my phone, but I will manage.
So I’m continuing to work on that music-generating software that I hope to turn into a business. My deadline is July 31st of this year (2019). By the end of July, the app must be presentable, either to advertise it and open it to limited paid beta-testing, or to seek interest from investors. (Or both, I guess.) That gives me about 5.5 months to build the first version of the app. If the app cannot be completed by that date, it will have to go onto the back-burner. Because money. I can’t afford to spend the entire year tinkering with it if it will need significantly more work to be presentable.