I think of the concept of sound identification as more of an involuntary function of the mind.
Average non-musicians can hear chords; the difference is that a musician takes that same info and processes it further in a different, "analytical" part of their conscious mind.
Like a pet dog who suddenly perks his ears and runs to the door to greet his owner because the dog recognizes something about the sound of his owner's car, and the dog's mind can instinctively pick up and isolate that sound from the background drone of a busy highway full of cars running past his home.
Analytically, that musical drone sound that includes chord notes may sound like a single sound to the listener. But I think if you did an experiment and started to replace single note frequencies in that drone sound with note frequencies for another chord... even one note... it might change how the listener hears the drone sound. I think the mind will unconsciously expect note shifts from that particular drone sound because that's what happened in the past.
It's like how people can hear just a few notes from the beginning of a song, and anticipate what will come next from memory.
I'm convinced that prediction of what comes next is a large part of what makes music enjoyable for people to listen to.
It's why many hit songs are based on a simple, repeating structure that allows the listener's mind to quickly anticipate what will happen next in the song.
I recall reading research into pop music, and a correlation was found where many hit songs had fewer notes and more repetition on average.
Like...
"Happy" by Pharrell Williams, and "Can't Get You Out Of My Head" by Kylie Minogue
Play any chord through an oscilloscope and it’ll produce a waveshape unique to that type of chord. Just make sure each note in the chord is a sine wave to produce the simplest waveshape possible for the chord. Bingo.
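To make that concrete, here's a minimal Python sketch of the idea (the note choices and sample rate are mine, purely illustrative): summing one sine per chord tone gives a composite wave whose repeating shape differs between chord types.

```python
import numpy as np

SAMPLE_RATE = 48_000

def chord_wave(root_hz, semitones, duration=0.05):
    """Sum equal-amplitude sine waves for the tones of a chord."""
    t = np.arange(int(SAMPLE_RATE * duration)) / SAMPLE_RATE
    freqs = [root_hz * 2 ** (s / 12) for s in semitones]
    return sum(np.sin(2 * np.pi * f * t) for f in freqs)

major = chord_wave(220.0, [0, 4, 7])  # A major triad: A, C#, E
minor = chord_wave(220.0, [0, 3, 7])  # A minor triad: A, C, E
# Plot a few cycles of each (e.g. with matplotlib) and the two repeating
# waveshapes look clearly different, as the oscilloscope claim suggests.
```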
Well, that’s no help. That’s a representation of what is. That’s not a description of what is.
Please yourself. You were the one trying to draw pretty curves...
Intersections
CHORD FIGHT!!!!1!
Sorry, I don't have any content. Well, except that seeing chords as "monolithic" seems wrong when considered from a bass-line point of view, I think, as when trying to create a walking line that works with a progression of chords. I think you want to see it as a set of notes then. But my brain is not big when it comes to this.
Maybe I misunderstand and don’t see the whole point of this thread, but IMHO the least information required to describe a chord is the actual name of the chord. Granted, the name of the chord does not tell you how to voice it.
What is the difference between the line and the frequency ratios?
Because the X/Y graph doesn’t add any kind of valuable information.
This is exactly the same as [1,3,5] for a major chord.
The X coordinate is “nothing” here.
In fact, this graph is bullshit, because you’re representing a continuous variable. A big mistake.
That is true - I’ve probably got too many axes altogether. Yep, those diagrams have too much redundant variable stuff all over. Ignore them, they’re unhelpful.
Okay, imagine this as a scenario:
Pretend we have a modular synth, and it has a VCO which we give pitch information to in the form of a control voltage. So far, normal. Now imagine the VCO is a bit special, it’s an additive arrangement so that it can produce not only the fundamental (the pitch you’ve specified with the CV) but also sideband harmonics which amount to a chord. Now you’re varying the root note, but getting a chord’s worth of frequency distribution – the same partials that would constitute a chord are now present in the VCO because it is an additive generator. You can’t tell the difference, you think a chord is being played, because, well, it is. Except all you can do is go up and down in pitch, the whole lot gets transposed as you play along with the root note. Same effective chord pattern, just going up and down.
You haven’t changed the chord type, all you’ve done is shift it up and down.
We’re still in our modular analogue synth now, we have a CV for pitch. What additional information do we need to specify what chord type? Like I say, I’d be highly surprised if you can do it with a single scalar value (ie, another CV that somehow according to the voltage specifies ‘chord type’). What do we need, then? Can the chord characteristics be embodied in a pair of values? Three? Four? I kind of intuitively suspect it is a spline, so we’d need perhaps four values (which you’re thinking is about the same as specifying the discrete note numbers, so why bother with this complex scheme?) to specify the spline shape (where the pitch CV acts as the starting anchor anyway, so we only need three additional values).
One advantage, if this could ever come to fruition, is that it would be possible to change chords by integrating the values from one to another smoothly – shift from major to minor by going through all the infinity in between along the way. The Bézier specifies the characteristics wherever it intercepts the grid of notes (well, not a grid but a set, I suppose - a 1-dimensional row). We’d also drift through impossible chords or undiscovered chords along the way, or outside of those limits.
I don’t know; like I say, I’m very vague about this because it’s a faraway idea that I’m not close enough to see in detail and don’t know enough about, which hides a lot of it from me.
I’m confident one of you can explain to me what I mean, because I certainly don’t understand it.
I’m pretty sure it would be possible to define an oscillating function which generated appropriate intervals at zero crossings to generate chords. Most chords are based on thirds, so to take the simplest example with just three notes, the difference between the major and minor chord is just the middle note, so by lengthening or shortening each half of the cycle, you’d be able to switch between the two. You’d need to decide whether you quantised the result, so the chords changed as for a piano, or whether it is continuous, as could be achieved on a fretless instrument, or by bending a note.
But the maths involved seems way more complex than just giving a series of intervals. So any major chord could be (X, 4, 3), where X is the root note. You could substitute frequency ratios if you wanted to consider the continuous case. This would be much more concise than trying to fit a continuous curve to all possible chord variations, especially when you get to all the variations of 7ths, 9ths etc. Plus you’d need some kind of parameter to define whether or not those extra notes are actually sounded. So although you are fighting against it, intervals seem to be the way to go.
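For what it’s worth, the (X, 4, 3) scheme is trivial to realize; here’s a quick Python sketch (the function name and the use of MIDI note numbers are my own choices):

```python
def chord_notes(root, *intervals):
    """Expand (root, i1, i2, ...) stacked semitone intervals into MIDI notes."""
    notes = [root]
    for step in intervals:
        notes.append(notes[-1] + step)
    return notes

print(chord_notes(60, 4, 3))     # C major: [60, 64, 67]
print(chord_notes(60, 3, 4))     # C minor: [60, 63, 67]
print(chord_notes(60, 4, 3, 4))  # Cmaj7:   [60, 64, 67, 71]
```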
@u0421793 Thanks for the example with the modular synth approach - for me it's now much clearer what you were talking about in the previous posts.
If you want to be able to interpolate between known chords through the 'undiscovered' harmony space, whatever we come up with can't be the integer semitone intervals between the notes of the chord, but probably the fractional frequency ratios. Since there are several of these ratios in a chord, and chords are composed of different numbers of notes, the single formula needs to generate one, two, three, four or five output values depending on the input, and for each of these values run through the frequency ratios for that specific chord, while for many given single input values also reaching specific combinations of these ratios.
The main problem is that there is no such thing as an 'order of chords'; their harmonic impression differs depending on the context of previous chords.
Sounds plausible so far
Here’s another spanner in the works, a picture from
https://hubblesite.org/contents/articles/spectroscopy-reading-the-rainbow
These are (very tenuously) analogous to chords, or chord types. I wonder if there’s a way of describing those distributions without discretely referring to those distribution hit points themselves.
And there are probably different ways of interpolating between these spectra - i.e. which line moves where... (one needs to compute the minimal movement needed).
With these spectra you run into the same problem as with musical chords - there is no valid single-dimensional order. Hmm, the periodic table is a possible 2D order, but with 1D you'll get jumps between the element types.
I’ve also thought that much of what we think is related to pitch (chords) is really what we call timbre (distribution of harmonics over time). As someone pointed out, however, there is a big way in which chords are not just the stacks of pitches that we know today but rather abstractions that represent moments of linear, melodic movement. For many people (especially those trained in Schenkerian theory), chords don’t really exist, they are only the result of voice leading, melodic movement across various voices (counterpoint). What matters is that you go from a tonic to a dominant and back. Linear movement, not vertical entities. But admittedly this applies only to tonal music, so that’s a different discussion.
As for the oscillator you describe, have you tried the Chords mode in Spectrum (Mutable Instruments Plaits)? If I understood you correctly, (not sure I did), that mode is very close to what you’re imagining: The first parameter controls the quality of the chord: major, minor, the beautiful 6/9, etc. The second parameter controls the inversion (the order in which the pitches are stacked). That’s enough to describe the chords, so the third parameter just controls the timbre. I found this sketch of all the chords, which I think I got from the website of the musician Lightbath who designed the chords for MI (I might be wrong about this).
In this diagram, there are only 8 chord types or qualities (the first two are just octave and fifth, and there’s no diminished), so they’re not “all the chords,” but they are the most important ones in most tonal music, and you could still program as many as you want in your imaginary oscillator. So there you go: two parameters, one for quality, one for voicing. I still don’t understand what this has to do with curves or functions, but maybe it helps.
Curiously, I’ve never been able to use this mode for anything other than drones because I (have been trained to) dislike the sound that results from the kind of parallel movement you get when you transpose stacked chords (by changing the pitch of the whole thing without changing quality/inversion). Lightbath (or the person who designed the chords) was careful to arrange the chords and voicings in such a way that consecutive chords share as many pitches as possible to create smooth movement, but there is no easy way to create more meaningful progressions other than using MIDI automations. The wavetables in it, on the other hand, are lovely.
Another approach that you can look into is something called Neo-Riemannian theory. I’ve never spent much time trying to understand it but it can help, especially if you are more mathematically-inclined than I am. Just leaving their Tonnetz here to pique your interest:
https://en.wikipedia.org/wiki/File:Neo-Riemannian_Tonnetz.svg
@dvi Neo-Riemannian theory: Ah, that's where Navichord got the idea for its keyboard layout.
The MIDI communication protocol works pretty well at packing chord data into a minimal data space.
But I'm not sure what this seemingly ambiguous thought experiment is attempting to discover.
If I had to design an analog system for transmitting chord data... maybe look at frequency-division multiplexing, as used in old analog telephone systems. I think if you put an oscilloscope on a pair of copper wires transmitting multiple chord voices using multiplexing, you could get a snapshot at any instant of a waveform that contained all the note voices in a multiplexed frequency spread.
It's an old known technology, so it shouldn't be hard to implement.
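For fun, here's a toy Python sketch of that frequency-division idea (the carrier spacing, the crude moving-average filter, and the note frequencies are all arbitrary choices of mine, nothing telephone-standard): each voice is shifted into its own band, everything is summed onto one "wire", and a voice is recovered by mixing back down and low-passing.

```python
import numpy as np

RATE = 48_000
t = np.arange(RATE) / RATE  # one second of signal

voices = [np.sin(2 * np.pi * f * t) for f in (261.6, 329.6, 392.0)]  # C, E, G
carriers = [4000.0, 8000.0, 12000.0]  # one frequency band per voice

# Multiplex: amplitude-modulate each voice onto its own carrier and sum
# onto one "wire". A scope snapshot of this shows all voices at once.
wire = sum(v * np.sin(2 * np.pi * c * t) for v, c in zip(voices, carriers))

# Demultiplex voice 0: mix back down with the same carrier, then crudely
# low-pass with a short moving average (length 48 nulls the 8 kHz image).
mixed = wire * np.sin(2 * np.pi * carriers[0] * t)
recovered = np.convolve(mixed, np.ones(48) / 48, mode="same")  # ~ C / 2
```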
MIDI would be a better system IMO.
Yes probably! Lots of untapped resources for apps in those hardcore music theory sources.
The theoretical modular unit already accepts pitch in the form of a v/oct signal. There’s no reason to assume that there can’t be four, five, six or seven individual oscillators in the unit. Also internal to the unit are n slew limiters, one for each oscillator. Let’s keep it simple and we have three other inputs, one for major/minor, one for 7th and another for inversion. The unit contains a controller that takes the input cv and generates two or three (7th) additional cv signals based on the major/minor/7th inputs and then increases one, two or three of them by +1V based on the inversion input. The voltages are intelligently assigned to lanes that run through the slew limiters to the oscillators. Or you could do this in a separate module that generates separate cv outputs. Something like https://github.com/Strum/Strums_Mental_VCV_Modules/wiki/Chord?
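In software, a sketch of that controller logic might look like this (the argument names, the choice of a minor 7th, and the +1V-per-inverted-voice rule are my assumptions, not a spec):

```python
def chord_cvs(pitch_cv, minor=False, seventh=False, inversion=0):
    """Return one v/oct CV per oscillator (1 V per octave, 1/12 V per semitone)."""
    third = 3 if minor else 4
    semis = [0, third, 7] + ([10] if seventh else [])  # optional minor 7th
    cvs = [pitch_cv + s / 12 for s in semis]
    # Inversion input: raise the lowest `inversion` voices by an octave (+1 V).
    for i in range(min(inversion, len(cvs))):
        cvs[i] += 1.0
    return cvs

print(chord_cvs(0.0))                                         # root-position major
print(chord_cvs(0.0, minor=True, seventh=True, inversion=1))  # 1st-inversion m7
```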
As others have alluded to, what you're looking for is likely best described within the framework of musical set theory. If we ignore voicing for the moment (i.e. CEG is treated the same as GCE or CEG in another octave), a chord or scale quality (pitch class set) can easily be encoded in a single integer. For example, take an AM7 chord containing A, C#, E, G#. Each pitch from C to B can be assigned an integer from 0 to 11. Then A, C#, E, G# becomes 9, 1, 4, 8. This doesn't really compress the data in any way, but you can then treat these values as bit shifts and construct a single number as follows: start with an empty 12-bit binary integer and turn on the bit corresponding to each note. For AM7, we get 001100010010 or 786 in decimal. Thus the AM7 is uniquely represented by the single number 786. If you just want chord quality (M7, without specifying a root), the space of possible integers can be compressed further.
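In Python the encoding is only a few lines (names are mine, just a sketch), and it reproduces the 786:

```python
def encode_pitch_class_set(pitch_classes):
    """Pack a set of pitch classes (0 = C ... 11 = B) into one 12-bit integer."""
    code = 0
    for pc in pitch_classes:
        code |= 1 << pc
    return code

am7 = encode_pitch_class_set([9, 1, 4, 8])  # A, C#, E, G#
print(am7)            # 786
print(f"{am7:012b}")  # 001100010010
```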
This is where the idea of a "prime form" comes in – a lot of entities which are equivalent in some way can be reduced to a single representative label. The traditional set-theoretic prime form discards useful features like inversion, but it's generally not hard to construct an alternative which preserves these traits (depending on which you care about).
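As a sketch of one such alternative (not the traditional prime form): treat the 12 transpositions of a bitmask as equivalent by taking the smallest rotation, which keeps major and minor (mirror images of each other) distinct. This reuses encode_pitch_class_set from the sketch above:

```python
def transposition_class(code):
    """Smallest value over all 12 rotations of a 12-bit pitch-class bitmask."""
    return min(((code >> r) | (code << (12 - r))) & 0xFFF for r in range(12))

c_major = encode_pitch_class_set([0, 4, 7])
a_major = encode_pitch_class_set([9, 1, 4])
c_minor = encode_pitch_class_set([0, 3, 7])
# All major triads collapse to one representative...
assert transposition_class(c_major) == transposition_class(a_major)
# ...while the minor triad (the set-theoretic inversion of major) stays separate.
assert transposition_class(c_major) != transposition_class(c_minor)
```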
Alternatively, you could expand out to a 16-bit (or more) integer and encode more data (e.g. the root note or potentially voicing information). In this manner it is definitely possible to encode chord type in a single scalar value, though again if you want more specific voicing information you would have to change the encoding, and at some point just listing the MIDI notes is more efficient.
One could argue there are no impossible or undiscovered chords/scales (at least in traditional, 12-TET western harmony) as all 352 (treating those which are transpositions of each other as identical and including the empty set, see https://oeis.org/A000031) have been catalogued in terms of Forte numbers. If you treat the pitch space as continuous rather than discrete, however, you could get an infinity of chords. If I'm interpreting correctly though, at least for the case of major and minor, all that would change would be the tuning of the third (since major and minor still share the same root and fifth). You would need a different mapping/space to get a more interesting path.
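That 352 is easy to double-check with Burnside's lemma, counting 12-bit binary necklaces; a quick verification (function names are mine):

```python
from math import gcd

def count_necklaces(n=12, alphabet=2):
    """Necklaces of length n: (1/n) * sum over divisors d of phi(d) * alphabet^(n/d)."""
    def phi(k):  # Euler's totient, brute force
        return sum(1 for i in range(1, k + 1) if gcd(i, k) == 1)
    return sum(phi(d) * alphabet ** (n // d) for d in range(1, n + 1) if n % d == 0) // n

print(count_necklaces())  # 352, matching https://oeis.org/A000031 at n = 12
```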
However, it might also be interesting to modulate between chords while keeping their discrete nature. This isn't really integration, more interpolation. One idea that jumps out to me is to encode chords as points in some higher-dimensional space and draw a straight line between the chords you want to travel between, listing the closest points/chords to that line on the way. The mapping you use to embed each chord into the space would determine the sequence of chords you get. A natural mapping would be a 12-dimensional space where each axis is either 0 or 1 depending whether the chord contains the corresponding pitch, but then you get a 12-d hypercube with each chord sitting at a corner, and the path between any two chords wouldn't actually intersect or even pass by any intermediate chords.
A simpler mapping would be in a 2D plane, with one axis representing the root note (12 gridlines) and the other representing chord type (352 gridlines). You would get something like this:
This would of course have a lot of chromaticism and not be overly musical. Moreover, it would probably contain more chords than you'd like in a leading between CM and FM7.
However, we can definitely improve the embedding. A more musical approach might be to list root pitches in a cycle of fifths, so that adjacent points don't give a chromatic leading. Other ideas might be to sacrifice some level of detail (instead of all possible chord types) for the sake of a smaller space, or to switch to a 3-dimensional space and add an axis for inversions. There are infinitely many ways to approach this problem and I'm sure at least one would give interesting-sounding results.
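Here's a toy Python version of that improved embedding (the tiny chord-type axis, endpoints, and step count are all arbitrary choices of mine): roots run along the circle of fifths on one axis, chord type on the other, and each step of the straight line snaps to the nearest grid point.

```python
# Circle of fifths as pitch classes: C G D A E B F# C# G# D# A# F
FIFTHS = [0, 7, 2, 9, 4, 11, 6, 1, 8, 3, 10, 5]
# A tiny hand-picked type axis (index -> semitone offsets), not all 352.
TYPES = [(0, 4, 7), (0, 4, 7, 11), (0, 3, 7), (0, 3, 7, 10)]

def interpolate(start, end, steps):
    """Walk a straight line between two (fifths_x, type_y) grid points."""
    out = []
    for i in range(steps + 1):
        frac = i / steps
        x = round(start[0] + frac * (end[0] - start[0])) % 12
        y = round(start[1] + frac * (end[1] - start[1])) % len(TYPES)
        root = FIFTHS[x]
        out.append(tuple((root + s) % 12 for s in TYPES[y]))
    return out

# From C major at (0, 0) to A minor 7 at (3, 3):
for chord in interpolate((0, 0), (3, 3), steps=3):
    print(chord)
```

A nice side effect of the fifths axis: the snapped path here comes out as C, Gmaj7, Dm, Am7, already a fairly functional little progression.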
This kind of stuff always interests me, especially taking atonal set-theoretic concepts and warping them to fit more traditional musical sensibilities, so let me know if you have questions or ideas.
I like the way Busy Works Beats breaks down chords.
Sign up to get free chord code book.
https://www.busyworksbeatsproducts.com/musictheory2
There’s tons of chord formula charts available in pdf with a quick search, but I like the way he does it with the Zero.
Stuff like this
Depends how detailed a description you need.

> @JudasZimmerman said:
Yeah, that's a pretty key observation. A chord is a cross-sectional slice--which is killed in the taking--of a dynamic process. Naming slices is useful for the student, but it's harmony conceived at a toddler level compared to mastery of harmony. And having mastered the motion, one doesn't even need to think in terms of slices anymore, much less know the names of the slices--all that becomes irrelevant unless you need to communicate a framework to another player.
easiest description of any chord:
"just fucking hit any number of notes between 2 and 8 (inclusive) until it sounds pleasant to your ears"
What if we don’t want pleasant? Is it still a chord? Is a harmonically complex single-note sound that happens to have high enough harmonic peaks in other areas actually a chord (ie, a dual VCO synth where the VCOs are tuned to a fifth, or a 3 VCO synth where they’re tuned apart as root third fifth)? Is a harmonically complex single-note sound that just happens to invoke a lot of peaky sideband stuff that you’d mistake for lying at harmonically meaningful intervals actually a chord? Is a single-note sound that is pretty much filtered white noise actually a chord? Is an explosion a chord? A cannon fire? A fart? A cough?
If we only qualify harmonic combinations that are pleasant, and exclude a lot that aren’t pleasant, that’s a bit unfair on the whole chord world.
And pleasant to whom? What if they change their mind?
This is so much fun... everyone who tries to narrow the answer to a precise criterion gets a more expansive problem statement that even includes the infamous "raspberry" (the Fart chord).
Computer scientists measure the complexity of a problem using this language:
Your question and its solution are NP-hard. The idea of a "minimum" for something so vast is an exercise in contemplating the infinite, which can conveniently be found in the lower portion of your abdomen.
This chart does an excellent job of making the concept even harder to understand. It looks like a basketball playbook to me. Your problem lives near the net.
https://forum.audiob.us/uploads/editor/hn/0a2cmwa5ncwb.png
I agree chords are just intervals, plus how many notes are playing at once. And the distance between intervals changes the emotions the chords produce.
Now the real question is: why do humans react to these interval relationships regardless of what language they speak or where in the world they come from? Is it just something so deeply ingrained in human DNA?
When considering a finite number of chord types, the problem certainly isn’t NP-hard. It takes a minimum of log2(n) bits to uniquely encode each, where there are n possibilities total - for instance, the 352 transposition classes above fit in ceil(log2(352)) = 9 bits. Even if you allow infinitely many chord types, I’m not convinced the minimal representation is an NP-hard problem. There are tons of infinite spaces from which points can be described in a provably minimal finite representation.