We have transitioned to a more modern website. Congratulations to Hannes Pasqualini for the design!
Good news if you could not grab a kit last Sunday! A new batch of Anushri kits is scheduled for mid-December. We are handing over production to our German partner Frank Daniels and his crew, who have been providing us with the laser-cut plexiglas cases since 2010. Frank is also the designer of the Shruthi-XT controller and other cool DIY projects, and he has been a pillar of the Mutable Instruments community for the past few years.
97 Anushri kits were made available on the Mutable Instruments site on Sunday evening, and they sold out within the 6 hours following the release (if you are wondering, kit 1 was sent to Mutable Instruments’ German Laser-Jockey – and kits 2 and 3 were built here to proof-read the assembly instructions).
Why not more, and why will it take a while for a new batch to be produced? Two factors are at play.
The first one is infrastructure. Mutable Instruments is growing, and is reaching a stage at which things will have to be done differently to support this growth. What worked for the Shruthi-1 did not work well for Anushri – which has twice the part count. We worked very hard to release this first batch of kits, and realized that the conditions under which it was produced were not optimal. Production will resume once we have figured out a more efficient way of producing and distributing kits.
The second one is support. In theory, Mutable Instruments’ policies state that builders are responsible for getting their kits to work. In practice, very few builders fail to receive troubleshooting assistance to get their MIDI in icon to blink happily or their filter to scream – the only line we draw and strongly enforce is that we do not repair mis-assembled kits. A day of support at Mutable Instruments is more commonly filled with “If you see a 100kHz square wave at this node, it means the integrator charges itself very fast, probably through the op-amp compensation cap only, not the external cap – check for a bad solder joint on C9” than with “Have you checked that the power cord is plugged in?” (though that happens too). Furthermore, once kits get built, ideas for mods and firmware hacks crop up – all requiring expert guidance. All in all, the “support” role at Mutable Instruments is more like “product engineering – the lost levels”, and that is why, following the introduction of a product, support can be done by no one other than the designer of the instruments… At the moment, I can provide support and guidance to a community of no more than 100 people, and FAQs, wiki pages, forum posts, mods documentation etc. will have to be written before things can scale up. This might seem shocking and unprofessional – “you mean the product and documentation are still in progress while people are using them?” – but this is indeed the whole point of a DIY/open-source approach, and of the extreme degree of unpredictability that comes with letting everybody do their share of the assembly work. This is a very different situation from industrial assembly, in which there is only one design, and usually a single set of manufacturing conditions which can be controlled at will and reproduced over and over until they are mastered.
The DIY/mod culture encouraged by Mutable Instruments makes the design and the assembly conditions grow in diversity with the number of users. Fortunately, I have no doubt the community will catch up and the level of solidarity between Anushri users will match the awesome sense of community surrounding the Shruthi-1.
While you wait for the return of the kits, a batch of PCBs has been reordered and will be listed on the store in 2 weeks. Cases will continue to be available (you can even get some with “custom engraving/colors”:http://mutable-instruments.net/forum/discussion/2044/anushri-custom-cases/p1). Final tip: I have updated the BOM with an alternative reference for the switch which was out of stock at Farnell – the BOM now lists a reference used in the prototypes, which has a shorter lever but the correct PCB footprint.
The Anushri project actually has three roots. One of them was a concept for a synth “celebrating” the SSM2164, using one of these chips for each voltage-controlled element (dual VCO, VCF, and VCA), which I started simulating in September 2011 and abandoned because the resulting machine would have been too complex for a kit. The second was a concept for an “analog control board” for the Shruthi-1, replacing the digital oscillators with a single VCO – a kind of upgrade to the Sidekick. Given the increasing complexity of Shruthi-1 filter boards – which relied on digital switching – and the difficulty of cramming a full VCO circuit and LFOs/envelopes into the small board space, not to mention a MIDI->CV converter to make it useful, this project did not go very far. The last project came from a small synth manufacturer who asked me for a quote for designing the digital logic of an analogue synth (MIDI->CV, arpeggiator and digital LFOs). This intrigued me, and I worked my way through half of the code to see how easy it would be, given that all the functional blocks were available from existing projects. All of these were “scratch an itch” side projects rather than something I really believed in as a product.
In February I gave an interview to a modular synth blog and, talking about the Shruthi, I said that I found VCOs uninteresting. I started thinking that maybe I was fooling myself just because I had never gone through the process of getting one to work, and started looking into the topic again – using the SSM2164 saw core I had previously been playing with in simulation. Around March, the Ambika project was approaching completion and I was getting very frustrated with it – the development seemed never-ending, with many bad surprises along the road; and just as with the Shruthi-1, I was a bit scared at the thought that users would never, ever get enough of the project – that it would be feature request after feature request for the months (years?) following the release. I thought about the analog minisynth concept again, and was seduced by the fact that it would be a pleasant change from Ambika – the “one knob per function” interface people expect from a monosynth would keep the project away from feature creep (and hardware is much harder to upgrade than software!); and it seemed like a relatively small and easy thing to build compared to Ambika. I breadboarded my 2164 VCO + an SVF using the remaining half of the chip, with an LM13700 VCA (I did not want to use another half SSM2164 for that), and it was indeed fun to play with!
One thing still bothered me: there were already many one-VCO synths on the market (Minibrute, Dark Energy, Domino, Nanozwerg), and I wanted to go a bit further sonically. Adding a second VCO? No, too much circuitry. Adding a digital oscillator à la Shruthi-1? Not an option either: I had committed myself to using a high resolution/refresh rate for the CVs (to make them as smooth as possible), and there wasn’t enough CPU left on the ATMega328. The spark of inspiration came from reading Tom Wiltshire’s page about the Juno DCO. I realized that a DCO is really just a VCO with sync signals coming from a digital source. So I decided to route a sync signal into the VCO to achieve a hard-sync sound – the sync signal source being the 16-bit hardware timer of the 328p. This worked great and allowed both DCO-like operation (something I did not really bother with) and buzzing hard-sync sounds. It did not take long before I decided to route this digital sync signal to other places – the VCO current source, to get linear FM, and the mixer, where it could be used to widen the VCO sound. Another idea for enriching the sound was to find a use for the unused half of the LM13700. When overdriven, OTAs add a characteristic “tanh” saturation to the signal. Furthermore, I had already observed on the Ambika SMR4 board how the built-in Darlington buffers yield clipped/asymmetric signals when used incorrectly. This gave a straightforward but very useful distortion/fuzz circuit. At this stage the design looked good to me. I decided to add a bunch of modular-style connectors, given the number of requests I had received in the past regarding interfacing the Shruthi-1 with modular gear. I came up with a first layout which used two rows of pots: one for the synthesis functions (directly wired into the analog signal processing chain), the other software-defined, serving as ADSR/LFO controls, general system settings, and 8 steps + length control for an analog-style step sequencer.
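The timer-as-sync-source idea boils down to simple arithmetic: to make the VCO behave like a DCO, the hardware timer must reset the analog core at exactly the note frequency. A minimal sketch of that calculation – the 20 MHz clock, the function name and the rounding are my assumptions, not the actual Anushri code:

```python
# Sketch: deriving the 16-bit timer compare value that makes the
# sync signal fire at the note frequency (CTC-style operation).
F_CPU = 20_000_000  # assumed MCU clock in Hz

def timer_compare_value(note_freq_hz, prescaler=1):
    """Compare value so the timer overflows (and fires sync) at note_freq_hz."""
    return round(F_CPU / (prescaler * note_freq_hz)) - 1

# A4 = 440 Hz: the VCO core is hard-reset 440 times per second,
# locking its perceived pitch to the digital timer.
print(timer_compare_value(440.0))  # -> 45454
```

With a 16-bit timer and a prescaler, this covers the whole audio range with sub-cent resolution, which is what makes the DCO trick viable.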
Writing the firmware was a very, very quick affair, given that I already had the basic code tree for a monosynth ready (if you’re wondering, the very first pieces of code I write on a new project are voice.h and parameter.cc). I got a proto made, and as usual the firmware ran flawlessly on first boot – minus all buttons and pots operating backwards – but the whole thing still felt broken.
Things that were wonky:
- The analog-style 8-step sequencer left me unimpressed. I still don’t understand how people can be comfortable with the idea of programming 2 or 4 bar note patterns with *knobs*. I decided to bet all my money on the 101-style sequencer instead, even if it made it impossible (for the moment) to use the unit without an external MIDI input.
- Once 15mm knobs were fitted on the pots, the layout was not very usable.
- The long row of controls made it hard to remember which pot does what.
- In a case, it would have looked fugly. Especially since it would expose A LOT of solder joints, and there would be a lot of white space.
- The VCO tracking really sucked.
To solve the layout problem, I bit the bullet and re-laid out everything on two boards – something I had originally wanted to avoid, to allow for a slimmer case and bring the cost down. It took two whole days, but the readability of the layout greatly improved. Bonus: the boards were now small enough to be mounted behind a Eurorack panel – this mattered to me since I was starting to build my little modular setup at the time.
The VCO problem was a tough one, because I spent a lot of energy trying to solve the wrong problem. My original focus was on the integrator reset time in the saw core, but even by faking Spice models of the crappiest JFETs, op-amps with the most horrible offsets, or comparators slow as molasses, I could not reproduce in simulation anything as bad as what I was measuring. That’s when I decided to pimp my measuring equipment and get something more serious to probe the integrator current source…
After a few days blocked on the problem, everything pointed to the exponential converter – something confirmed after rapidly breadboarding a standard transistor-pair expo converter in place of the 2164 and getting 5 octaves of tracking, minus the temperature stability. It took me a while to figure out what was wrong… It turned out that the SSM2164 input is not the ideal virtual ground I thought it was! The problem was solved by increasing the value of the resistor at the 2164 input (from 15k to 100k) – less current, fewer non-linearities affecting the scaling.
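For context, an ideal 1V/octave exponential converter simply doubles the frequency per volt, and tracking quality is measured in cents against that ideal. A small sketch of both (the f0 value and function names are arbitrary, not my actual test code):

```python
import math

def vco_freq(cv_volts, f0=65.41):
    """Ideal 1V/octave exponential converter: frequency doubles per volt.
    f0 is an arbitrary base frequency (here roughly C2)."""
    return f0 * 2.0 ** cv_volts

def cents_error(measured_hz, ideal_hz):
    """Tracking error in cents (100 cents = 1 semitone)."""
    return 1200.0 * math.log2(measured_hz / ideal_hz)

# "5 octaves of tracking" means the measured frequency stays within a
# few cents of vco_freq(cv) from cv = 0 V up to cv = 5 V.
```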
With that problem fixed, I got a second proto made. The interface was much better, and the VCO tracked well over 5 octaves (the remaining error was due to the integrator reset time and some errors in the 2164 expo response – would have required more circuitry for compensation…).
What remained to be solved was what to put in place of the analog-style step sequencer. I quickly toyed with the idea of an 8-step waveform editor for the LFO (a kind of sequencer, but at modulation rates) – it wasn’t good. I had 9 knobs, roughly 40% of the CPU left after some optimizations, and 2 days before a demo at Modular Square. And this is when I decided to resuscitate… eigendrums. This was a fairly old drum machine concept inspired by an ISMIR paper by Ellis and Arroyo. Let’s do it!
The first problem was how to get audio out of the MCU. Solved by hacking the board to hook the MCU PWM pin up to the output op-amp (I also tried hooking it into the VCF->VCA path, but then a synth note had to be played for the drums to be heard, which sucked). I had to swap the functions of two MCU pins for that. Damn, the Shruthi-1 had made me sick of PWM and here I was at it again…
The second problem was to code a drum synth with very low CPU requirements. The simplest approach that could work: a digital sine oscillator with AD envelopes for pitch and amplitude + noise + ring modulation. Since I decided to dedicate only one knob to adjusting the tone, the knob worked as a morphing control through various combinations of parameters, which I programmed into the unit through CC with a MIDIpal, writing down parameter values on paper. Later I ditched the ring modulator and used a sample for the HH.
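The recipe – sine oscillator, decaying pitch and amplitude envelopes, a dash of noise – can be sketched in a few lines. All parameter values below are illustrative, not the Anushri’s; the real thing runs in fixed-point on the AVR:

```python
import math
import random

def render_drum(n, sr=31250, f0=180.0, pitch_decay=0.002,
                amp_decay=0.004, noise=0.1):
    """Minimal kick-style voice: a sine whose pitch and amplitude both
    decay exponentially, plus a sprinkle of noise, hard-clipped."""
    out, phase = [], 0.0
    for i in range(n):
        pitch_env = math.exp(-pitch_decay * i)   # fast downward pitch sweep
        amp_env = math.exp(-amp_decay * i)       # amplitude decay
        phase += 2 * math.pi * f0 * (0.25 + 0.75 * pitch_env) / sr
        s = amp_env * (math.sin(phase) + noise * random.uniform(-1, 1))
        out.append(max(-1.0, min(1.0, s)))       # clip to [-1, 1]
    return out
```

Morphing the one tone knob then amounts to interpolating between stored sets of these parameters.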
The third problem was the drum pattern generation. I wanted something fun to play with, that could generate musically interesting patterns, and that could allow the classic “build-ups” found in electronic music. My first try, a straightforward eigenrhythm implementation, worked well, but it was not fun to play. At this stage, I had 6 knobs controlling the 6 principal components + 3 for tone control. The problem was that the last knobs had decreasing impact (PCA “sorts” dimensions by decreasing variance), and that there was no way of getting dense or sparse patterns – the PCA learned the “average” density of a pattern, and I wanted a way of bringing in outliers too! What I wanted was something more like a Euclidean sequencer, in which it is possible to control sparsity/density. So let us work backwards… With 3 knobs dedicated to a sparsity/density control for each instrument (how would it work? TBD…), this would leave me with only 2 knobs for controlling the base structure of the drum pattern. How to map the space of drum patterns onto a 2D space? My first approach was to take a bank of presets, lay them out in 2D using the first 2 axes of the PCA, and use the 2 knobs as X/Y coordinates in this space (I had done it before, see figure 4). Problem: the map has gaps, and I didn’t want to store many patterns in flash. Same problem with LLE. What I wanted was… hmm… something like a topology-aware VQ that would give me a grid-like codebook? What is that called? A Kohonen map! That’s how I decided to use self-organizing maps to build a 2D grid of patterns. The last open problem was the sparsity/density control – once you have a pattern, how to progressively remove notes to turn it into an empty pattern, and how to progressively add notes to turn it into a grid of sixteenth notes.
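A self-organizing map fits in a few lines. This toy version (grid size, learning-rate and radius schedules are all arbitrary choices of mine, not the ones used for Anushri) shows the core idea: a grid-shaped codebook where neighbouring cells end up holding similar patterns, so two knobs can sweep it smoothly:

```python
import random

def train_som(patterns, grid=4, epochs=200, lr0=0.5, radius0=2.0, seed=1):
    """Minimal self-organizing map: arranges binary drum patterns
    (sequences of 0/1) on a grid x grid codebook. Cells hold fractional
    values during training and would be thresholded back to 0/1 for use."""
    rng = random.Random(seed)
    dim = len(patterns[0])
    book = [[[rng.random() for _ in range(dim)]
             for _ in range(grid)] for _ in range(grid)]
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1.0 - frac)                 # learning rate decays...
        radius = radius0 * (1.0 - frac) + 0.5   # ...and so does the radius
        p = rng.choice(patterns)
        # Find the best matching unit (closest cell to the sample).
        bx, by = min(((x, y) for x in range(grid) for y in range(grid)),
                     key=lambda c: sum((a - b) ** 2
                                       for a, b in zip(book[c[0]][c[1]], p)))
        # Pull the BMU and its grid neighbours towards the sample.
        for x in range(grid):
            for y in range(grid):
                if (x - bx) ** 2 + (y - by) ** 2 <= radius ** 2:
                    cell = book[x][y]
                    for i in range(dim):
                        cell[i] += lr * (p[i] - cell[i])
    return book
```

The neighbourhood update is what gives the map its topology: unlike plain VQ, nearby grid cells are forced to agree, so the X/Y knobs never jump.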
If, like me, your past job involved counting n-grams and solving combinatorial optimization problems, the solution is very obvious: select the sequence of note additions/removals that visits the rhythmic patterns with the highest frequency in a corpus of drum loops. The library is the grammar. A nice side-effect: randomly walking through this Hamming neighborhood creates musically interesting variations for free! I made it in time for the Modular Square demo – though the whole thing was trained on a small corpus of rhythms (a larger model is on its way – it did not make it into the first version of the Anushri firmware).
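The density control can be sketched as a greedy Hamming walk: each step flips one bit of the pattern towards the target density, preferring the neighbouring pattern that occurs most often in the corpus. Here `freq` stands in for the corpus counts, and the greedy strategy is my simplification of the idea, not the shipped algorithm:

```python
def step_density(pattern, target_density, freq):
    """Move one Hamming step towards target_density, preferring the
    neighbouring pattern that is most frequent in the corpus.
    `pattern` is a tuple of 0/1, `freq` maps patterns to counts."""
    current = sum(pattern)
    if current == target_density:
        return pattern
    bit = 1 if current < target_density else 0  # add or remove a note?
    candidates = []
    for i, v in enumerate(pattern):
        if v != bit:
            n = list(pattern)
            n[i] = bit
            candidates.append(tuple(n))
    # Pick the single-bit-flip neighbour seen most often in the corpus.
    return max(candidates, key=lambda p: freq.get(p, 0))
```

Iterating this from the empty pattern to the full sixteenth-note grid yields the whole sparsity sweep one knob turn at a time.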
After the Modular Square demo, I noticed something odd about my proto… When switching from the single-board layout to the 2-board layout, I had blindly rearranged the available controls without realizing there was room for more! I added two knobs: one bringing the number of software knobs from 9 to 10 (functions: VCO detune, velocity destination control, and a bitcrusher for the drum section – the FX maximizing the wow/CPU-cycles ratio, leaving all competitors in the dust), and one for the sub-oscillator level. There was also room for 2 more modular I/O jacks! This is how we got to the present third revision of the board.
The final touches to the firmware included a secret way of overriding the drum sequencer with a x0x pattern input on the keyboard, a way of sequencing the drum section from an external source, and bringing back the CC drum sounds editing I used during development.
All in all, the project took 1 month of hardcore development to get the basics done, 3 months to feel “right”, and 4 months of polishing – which are the ratios I am getting used to!
Ambika was born on April 14th 2011.
It took me quite some time to stop rejecting the thought of a polyphonic Shruthi. Many Shruthi-1 users were telling me that it would be an easy job given that the Shruthi-1 is polychainable (“just make a ‘slave’ digital control board without the UI”) – but anything that reused the Shruthi-1 platform looked like a good way of making an unresponsive, buggy, unnecessarily complex and unreliable machine. The Shruthi-1 was designed as a compact monosynth, so let it be mono! What also scared me was the thought of manufacturers of the late 70s (like RSF in France, and to some extent, Moog) going bust the very moment they started getting into polys. Bigger machines are expensive to design, prototype and troubleshoot…
On April 14th, I sent the first prototype of the dual SVF board for manufacturing, and I started looking at it from a different angle… In stark contrast with my “flagship” filter of the time – the SMR4, which required more than 120 parts – the SVF core used on the dual SVF board required far fewer parts, around 50. Suddenly, putting many of those circuits on a single board seemed feasible. After very briefly considering a single-board design with 3 voices (Trimurti, with a target part count of 400-500), I decided to go with a voicecard-based design, each voicecard being the lowest part-count Shruthi-1 voice I could do.
I quickly laid out a board with an ATMega328 (which would run the Shruthi-1 code) + an SSM2164 SVF (2-pole, CV control on resonance, and the remaining expo gain cell used as an expo VCA), got excited when it reached a part count of 64, and then started thinking of a motherboard to host those. That’s where things got tricky.
It took 6 weeks to converge to a first “good” design for the motherboard. Things that had to be considered…
CPU-wise, it was a no-brainer, because I wanted to reuse as much of the Shruthi codebase as possible – so a 644p for the motherboard, and 328p’s for the voicecards (to keep them small, and because they don’t have a lot of I/O to do). But wait… would it work? I had to try! The actual splitting of the codebase into master/voicecard sides was just a week of work. The voicecard code ran fine on an Arduino “upgraded” with a 20 MHz crystal and hooked to a Shruthi. It’s a bit strange to write code for hardware that doesn’t exist, but I went all the way and wrote the firmware for the non-existing master board in 3 weeks (in case you are not familiar with software: writing 90% of the functionality takes 10% of the time; getting the remaining 10% right is what takes most of it). I would have hated myself for designing a board around the 644p if the master firmware did not fit in!
Synchronization issues. On most polys, LFOs are shared by all voices – this is different from the “6 individual synths” approach I was getting into. So I decided to have the master MCU run the LFOs and update the voicecards at 1kHz. This allowed better MIDI-to-LFO synchronization. The downside is that it causes a lot of traffic from the master to the voicecards.
Master->slave communication. MIDI was too slow, even with a maxed-out serial link at 115kbps. SPI with a custom protocol was the right thing to do, and well suited to a single-master, multiple-slave setup.
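A back-of-the-envelope calculation shows why a serial link doesn’t cut it; the payload size per update here is my assumption for illustration, not the actual protocol:

```python
# Rough estimate of the master -> voicecard control traffic.
voices = 6
refresh_hz = 1000       # LFO/CV update rate per voicecard (from the text)
bytes_per_update = 2    # e.g. one parameter index + one value (assumed)
bits_per_byte = 10      # async serial framing: start + 8 data + stop

bps = voices * refresh_hz * bytes_per_update * bits_per_byte
print(bps)  # -> 120000, already above a maxed-out 115200 bps serial link
```

SPI has no per-byte framing overhead and runs at MHz rates, so the same traffic becomes trivial there.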
How to connect the voicecards? Originally there were 6 individual 10×1 connectors. It took me 3 weeks to realize that the voicecards could instead be stacked and share some common pins. Then a long detour was taken in the land of ribbon cables, before converging on the current solution.
Which audio outputs to provide? I wanted the proto to be very compact, so originally it had only 3 stereo pairs (2 voices per output) and a global mix, rather than individual outputs.
Storage. The original design had two 512kb EEPROMs, but multis were going to take a lot of space. Luckily I was working on a sampler at the time and had explored SD-card territory, so I switched to that.
Power supply. I thought a single 9V DC supply for the analog side was a good idea. The Shruthi-1 got me used to working with little headroom.
UI. The 2×40 LCD has been there from day 1. What changed during this design phase was the number of LEDs & switches, and the position of the pots. The original design had all 8 of them below the screen, in a staggered arrangement, and kept the 6 switches / 8 mono LEDs approach of the Shruthi.
I must have rerouted the motherboard more than 12 times due to all these changes… This brings us to June 6th, when the first proto was ordered. I thought I had really done well with the design and “got it right”. I was pleased to see that the controller firmware I had written blindly worked with only minor (but tricky) bugs. But the firmware was really the exception, because so many things were blatantly wrong with the hardware:
- The SD card level shifter, which I took from Adafruit’s datalogger shield, was not working. The catch is that the 74HC125 has to come from a specific family (SN74HC or 74AHC) to be usable as a level translator. Bad idea for a DIY project where people expect easy-to-source parts. So I switched to a 4050, which is perfect for this job.
- The voicecard had to be disconnected from the SPI bus when reprogrammed, because I had not thought about isolation between the ISP connector and the SPI bus.
- The first attempts to get the master MCU to talk to the voicecards failed, because while the ATMegas can transmit SPI quite fast, they cannot receive it at more than 4 MHz.
- After a few hours of use, it really hit me that an expo-VCA is awful for the main volume envelope of a synth.
- There were random signal integrity problems on the SPI bus, which made me think that my routing was flawed (large slots in the ground plane).
- Using a +5V regulator to get a mid-rail is not a good idea.
- At the time I was starting to realize that PWM was really hurting the Shruthi’s performance in the high end – I had to add poles at 18 or 20kHz on the filter boards to tame the PWM carrier – and since we expect polys to play stringy, bright sounds, I decided it was time to switch to a proper DAC. What really got me into this was the discovery that writing to a SPI DAC on an AVR is quite efficient if you don’t use the dedicated SPI port (which would be used for master/slave communication anyway) but the USART in synchronous mode instead. Using a 12-bit DAC could solve another problem – it provided enough resolution to invert “in software” the log response of the SSM2164 and get a proper linear response!
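Inverting the log response in software amounts to mapping the desired linear gain to decibels, then scaling that onto the DAC range. A sketch of the idea – the 60 dB control range, the function name and the linear dB-to-code mapping are my assumptions for illustration, not the Ambika firmware’s actual constants:

```python
import math

def linear_gain_to_code(gain, full_scale=4095, range_db=60.0):
    """Pick the 12-bit DAC code driving a dB-linear ('log') VCA so that
    the perceived gain is linear in `gain` (0.0 .. 1.0).
    range_db is an assumed usable control range."""
    if gain <= 0.0:
        return 0
    db = 20.0 * math.log10(gain)   # desired attenuation in dB (<= 0)
    db = max(db, -range_db)        # clamp to the usable range
    return round(full_scale * (1.0 + db / range_db))
```

With only 8 bits this table would be too coarse at the quiet end; 12 bits give enough codes per dB to make the inversion clean.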
I had the whole summer to think about those problems and fix them (I had to buy a logic analyzer to track down the SPI bugs, and got caught too many times by silly “forgot to switch the scope probe between 1x and 10x” problems). The most significant change was that I reworked my circuits to use a buffered LM4040 +5V reference as a mid rail. Then I realized that with all the changes to the voicecard, it was getting closer to 80 parts than the original 64 – so it might be worth doing a 4-pole filter too. Which I did, within a budget of 100 parts. Boards for version 0.2 came in mid-September 2011. The obvious mistakes were gone, but as I built more voicecards, I was shocked to find that the noise floor was bad: 70dB with a single voicecard, but down to 62dB with all cards in place. This was painful to solve, hard to measure, and ultimately I found that there was no way I could make it work (it would have required a single reference running all over the board, with a copious amount of bypassing for it). So I decided to rip everything up and restart from scratch, assuming ideal +/- 8V rails. Surely the noise would go away… I worked out a simple AC supply (half-wave rectification) and sized the input caps properly in LTSpice. This couldn’t fail… All the board shuffling that came at that time to accommodate more regulators and heatsinks made it feasible to have one row of 6 slim Neutrik connectors with individual outputs, so I went with that! It would have been silly to make people buy 1x stereo -> 2x mono cables anyway…
I got v0.3 at the beginning of 2012. What I heard after plugging in the first voicecard and cranking up the volume was not the pink noise of the previous version, but a loud 50 Hz hum – which of course wasn’t there when the thing was powered by my PSU proto. I entered “freakout mode” and sought help from more experienced people around me. The solution, which came after 3 sleepless weeks, was that 1/ split ground planes are not helping; 2/ the rectification circuit should be kept as tight as possible, and as close as possible to the power connector. My problem was due to strong AC currents from the rectifier flowing over a long stretch of the board, with my “analog” ground planes and my “digital” ground planes connected to both sides of this long stretch.
And thus came v0.4 in mid-February, which solved all the noise problems. Hurray! But then more problems appeared. One of them was that the 7908 was sometimes stalling during the boot sequence. It took me a bit of time to figure out what was happening, but the fix was fortunately simple. This is why you can still see a schottky diode stuck between some supply pins on the proto v0.4 photos.
Another thing that hit me was that the original 4-pole filter design, which used the Darlington buffers in the LM13700, wasn’t as ballsy as I wanted it to be. Given that I was working on an SSM2164 4-pole for the Shruthi-1 at that time, I thought about doing an additional voicecard using this new SSM2164 4-pole. I had also hoped it would do better in terms of part count – but it turned out to be more complex than the LM13700 version!
The last problems to be solved were mechanical. I had sent a v0.4 proto to Frank for him to make a plexi case, and he realized that the mechanical assembly would be too complex if the voicecards were screwed to both the motherboard and the bottom of the case. He came up with a different scheme in which the voicecards “hang” below the motherboard, which is attached to the bottom of the case with 40mm spacers (or 25mm for a slim 3-voice assembly).
The first version with the “right” set of holes was v0.5, and actually building it into a case revealed a few more problems – the audio connectors were too close to each other (what’s the point of those slim Neutrik connectors if they can’t be laid out side by side?); the DC jack was popping out…
These were fixed in v0.6… The last hardware problem to be identified was the excessive heat produced by the full set of boards powered by 12V AC – enough to cause a nasty smell when housed in the plexi case. This was solved by dropping the input voltage requirement to 9V AC and using LDO regulators to handle the lower input voltage.
One interesting thing to note is that at its inception, Ambika aimed to be a few notches above the Shruthi-1 in terms of synthesis features. However, I could not resist backporting to the Shruthi-1 codebase most of the features specifically written for Ambika (oscillator algorithms, mixing modes). In particular the great code rewrite that unified both oscillators and allowed both of them to run the “vowel” algorithm came from the Ambika codebase.
By the way, if you’re wondering about the name, it came from this, which was hanging on the wall behind my desk. It summarizes quite well the “1 master MCU / 6 per-voice MCUs / mix and match voicecards” approach (and yes, I have a thing for early Indian “calendar art”).
A new synth is coming soon from Mutable Instruments… and this time it is polyphonic!
Here are some of the key features:
- Up to 6 voices, each with an individual output — in addition to a global mix output.
- MIDI channels/patches/voices are distinct entities, allowing many different flexible configurations, from 6 independent monophonic parts each on a different MIDI channel, to 1 polysynth, with everything in-between (unison, keyboard split, layering, voice doubling).
- Connectors for up to 6 voicecards. In true Mutable Instruments spirit, you can mix and match voicecards with different filters, and in the future with different synthesis engines.
- Easy to use sound programming interface with a large 2×40 LCD display, 8 knobs, 8 switches and 15 bicolor LEDs. Each module of the synthesis engine has a page, each page has a direct access button.
- Massive patch memory, easy backup/data exchange, fast firmware upgrades with the integrated SD card reader. And there might be other things you’ll load from the SD card in the future…
- Patch versioning and undo/compare/redo of editing operations.
- Sequencer, arpeggiator and rhythmic chord generator available for each part. 2 step-sequences per part. Each part can be clocked at a different multiple of the MIDI clock.
- And of course: DIY friendly, through-hole assembly.
Each voice is on its own circuit board. Yes, it’s huge and it draws a lot of power!
The first 3 voicecards that will be released are based on a maxed-out version of the Shruthi-1 sound engine.
- All the Shruthi-1 oscillator goodness – classic analog waveforms, FM, wavetables, vowel synthesis, lo-fi tones.
- More z-family oscillator waveshapes, with digital emulations of analog waveforms sent through resonant LP/BP/HP filters.
- “Wavequence” mode for individually addressing the content of the wave memory (wavequence + step sequencer = wave sequencing).
- New mixer with adjustable overdrive and bitcrusher effects, independent of the mixing mode.
- 3 synchronized LFOs shared by all voices in a patch with new waveforms, and 1 desynchronized, per-voice LFO for subtle voice modulation effects.
- 3 ADSR envelopes with times up to 60s.
- Large modulation matrix (14 slots, 4 modifiers), with new modulation sources and destinations.
- Improved sound richness/brightness and extended filter range.
- 3 flavours of voicecards: Warm and classic 4-pole low-pass (OTA-C with Darlington buffers), sweet and liquid 4-pole low-pass (SSM2164), 2-pole multimode (SSM2164).
What will come after that? Voicecards offering a few channels of drum sounds. The following voicecards are in development:
- Multi-channel drum samples ROMpler.
- Analog drum module (2 instruments per voicecard).
How does it sound? You can listen to many sound clips here.
Ambika is a DIY project: all the technical choices have been made to make it accessible to DIYers, and it will be sold primarily as kits.
Last November, while working on the sequencer feature of an upcoming Mutable Instruments product, I ran into a very musically interesting class of “bugs”…
Given the memory constraints of the project, I was left with 64 bytes of RAM to code an interesting sequencer feature. What should I do with that? A single 64-step sequencer? Two 32-step sequencers? Two 16-step sequencers with note value / velocity like on the Shruthi? I ended up following a very liberal approach and offered two step sequencers with up to 64 steps, and condensed/complete modes (simple note value on 1 byte vs note/velocity/CC on 2 bytes). I say “liberal” because there is a caveat: if you want it to do things that do not fit into the 64-byte constraint, shit happens. For example, if you define sequence 1 to be 32 steps and sequence 2 to be 16 steps, everything is fine; but if you define sequence 1 to be 34 steps and sequence 2 to be 32 steps, the last 2 steps of sequence 1 will share the same note data with the first 2 steps of sequence 2. Instead of trying to restrict such situations in software, I just let them happen, and all kinds of interesting sequences came out of this – weird polyrhythms in which changing an element affects two tracks at the same time; velocity data being interpreted as note data, or notes being turned into velocities.
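The aliasing behaviour can be modelled in a few lines. In this sketch – one possible layout that reproduces the behaviour described above, not the actual firmware – the two sequences grow towards each other from opposite ends of a 64-byte pool, so over-long sequences overlap:

```python
class SharedSequences:
    """64-byte pool shared by two step sequences (1 byte per step, i.e.
    condensed mode). Sequence 1 grows from the start of the pool,
    sequence 2 is anchored at the end; over-long sequences alias."""
    POOL = 64

    def __init__(self, len1, len2):
        self.pool = bytearray(self.POOL)
        self.len1, self.len2 = len1, len2

    def _base(self, seq):
        return 0 if seq == 1 else self.POOL - self.len2

    def set_step(self, seq, i, value):
        self.pool[self._base(seq) + i] = value

    def get_step(self, seq, i):
        return self.pool[self._base(seq) + i]
```

With lengths 34 and 32, writing step 32 of sequence 1 lands on the same byte as step 0 of sequence 2 – the “weird polyrhythm” bug-turned-feature.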
I contacted Dan Nigrin, author of a few esoteric sequencers like the Klee or this take on Roland’s classic M185. We brainstormed on the concept of a sequencer exploring this idea: a limited space of notes traversed by mini sequencers biting each other’s tails. Did you know that some demomakers use a subset of a demo’s binary code as data for waveform generation, texture generation, or even camera motions? I like the idea of trying new ways of making the most of a limited set of notes… Composing a melody and then realizing that playing it at half the speed, shifted by one beat, gives an interesting rhythmic pattern for a bass drum.
This sequencer, called “CycliC”, slowly took shape… As a programmer I deal daily with modulo arithmetic, circular buffers and pointers wrapping around, and it had never occurred to me that this sort of behavior has to be visualized. But Dan came up with the circle visualization to make it look more intuitive. How does it work? You define a set of 32 notes and on/off steps, and create up to 6 mini sequencers traversing subsets of it at different speeds. If only Steve Reich had had that!
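My reading of the mechanics, as a sketch – the function, the cursor fields and the data layout are mine, not Dan’s or the actual CycliC code: each mini sequencer loops over its own window of the shared 32-note list, advancing by a fractional speed per clock tick.

```python
def cyclic_step(notes, cursors):
    """One clock tick: each cursor loops over its own window of the
    shared note list at its own speed, and emits the note under it.
    A cursor is a dict with 'start', 'length', 'pos', 'phase', 'speed'."""
    out = []
    for c in cursors:
        c['phase'] += c['speed']
        while c['phase'] >= 1.0:          # advance on whole-step boundaries
            c['phase'] -= 1.0
            c['pos'] = (c['pos'] + 1) % c['length']
        out.append(notes[(c['start'] + c['pos']) % len(notes)])
    return out
```

Because the windows may overlap and wrap around the 32-note circle, editing one note can ripple through several mini sequencers at once – the same happy aliasing as in the 64-byte sequencer.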