What's the future of synthesis?
  • Hi guys, just wanted to share this with you:

    Stanford University is currently working on a new FMHD synthesis protocol with improved sound-generation properties, and a real-time property-modelling editor that lets the user encapsulate Karplus-Strong and other physical-modelling aspects with relative ease.

    That said, what’s the future of synthesis? FMHD? Granular? Better virtual analog modelling? ...??

    Are there sonic possibilities not yet explored by the technologies that are accessible nowadays?

    What do you think?

  • Personally, my future of synthesis has an analog filter, designed, I suspect, by some French guy….
    I don’t really care about the oscillators as long as they produce something musically useful (at least to me).

  • It’s all about the interface. I’ve managed to crank out more sounds in seconds with the XT thing than I could have without it.

    Not that the existing interface is bad. I guess once you know that more knobs and sliders are possible, it spoils your perception.

    Analog vs. digital doesn’t really matter to me. The problem with most commercial hardware boxes is that they’re trying to release something at minimal cost. If their output DAC and code were rendering at 24-bit 96 kHz or so, then perhaps aliasing wouldn’t be so bad.

  • I’m assuming we’re not talking about the future of Shruthi synthesis but the future of synthesis. I agree with 6581punk that it’s about the interface. I don’t really care about analog or digital, I only care about getting sounds that are useful. I have a Kurzweil K2600XS, built in 1999, that always amazes me. Considering its age, this synth is easily more capable than anything I have tried since. I prefer it over the Korg Oasys, Yamaha Motifs or anything else I’ve tried lately. But, the VAST synthesis architecture is tricky to learn.

    What I would like in an interface is increasing levels of complexity. So, level 1 is “idiot-proof”, where I could specify some aspect of the sound and the interface would adjust multiple parameters. Then we could move into more complex models and eventually end up at something like the Nord G2 interface, where there is a plethora of virtual modules waiting to be inter-connected.

    The other idea I had considered was to use a bunch of iPods, each running a single module. So, instead of buying various modules for a modular synth, I end up with 20 iPods (or something like that, maybe less-expensive), with each running a module app and each connected to the other(s) somehow by virtual cables.

    Just some random ideas really. I think that interfaces should be software, it’s less expensive and inherently more agile than hardware. Sound generation could be anything, I don’t really care.

    Randy

  • I agree about the interface as well. It shouldn’t get in the way of manipulating the sound generation I think, so maybe there is a limit to how complex the sound generation can be and still be easy to control.
    A customizable interface would probably be a good thing, but if you like hardware it could get tricky. Not enough knobs etc. :-)

    One cool project you could check out is Din.
    It uses Bézier curves to define the waveforms. Certainly something different.
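
    Just to illustrate the idea (this is only a rough guess at the principle, not Din’s actual algorithm): sampling one cubic Bézier segment to fill a single-cycle wavetable, in Python. The control points here are arbitrary.

    ```python
    # Rough illustration of Bezier-defined waveforms: evaluate one cubic
    # Bezier segment at N points and use the values as a wavetable.
    def cubic_bezier(p0, p1, p2, p3, t):
        u = 1.0 - t
        return (u**3 * p0 + 3 * u**2 * t * p1
                + 3 * u * t**2 * p2 + t**3 * p3)

    # Arbitrary control values; the y coordinates become sample amplitudes.
    table = [cubic_bezier(0.0, 1.5, -1.5, 0.0, i / 255.0) for i in range(256)]
    ```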

  • I have so many devices where so much potential is unrealised due to poor interfaces. Software editors can help a little. But even a mouse and GUI is a limited interface, it is not tactile. I like to feel controls in my hand.

    It’s why I find soft synths less than gratifying.

  • I only partly agree with the interface problem. The MicroWave, for example, is a pain in the ass to program, yet so rewarding (to me) that I don’t care about the interface. So in the end, it’s all about the sound. What’s a good interface worth if the sound the synth produces is shitty?

    On the other hand, I must admit that FM, for instance, would kick ass if only there were a new UI for it.

    But as long as the big ones try to emulate things that could easily be done with a few op-amps (presumably because the knowledge has been lost there?) with zillions of transistors and many, many layers of software, I don’t expect much. In fact, looking at today’s range of synths from the “big ones”, there is nothing I would like to buy (OK, I need more Shruthis…), neither for the sound nor for the interface. I don’t see much future coming from there….

    It depends on the synth. When I talk about unrealised potential I’m talking about the Korg Wavestation. Its wave sequencing is tedious to program, and I have the SR as well, which has fewer controls on it than an old FM synth.

    Back in the day I had the time to program sounds with such minimal interfaces and I had limited funds to get anything better (although minimal interface rack mounts were all the rage).

    Controls are not just about editing either, they’re performance aids.

  • +1 for the beautiful yet UI-wise totally fucked-up Wavestation…. I use it as my master keyboard because I got so used to its keyboard…

  • Then you have a great UI attached to an average sound engine. The JD-800.

    Subtractive is pretty easy to create a UI for, but FM less so. But the PreenFM is pretty good for something so small. The demos don’t really do it justice.

  • + another 1 for the Wavestation, I really miss it. A resonant filter would’ve made it a totally killer synth. The Wavestation is one of the reasons I was so dissatisfied with the Yamaha S90. How could such a much newer synth have such a terrible UI compared to the older Wavestation!

    I also agree that there isn’t much out there from the big guys that’s particularly thrilling. I’m really anxious to see what Kurzweil releases one of these days.

    My ideal thing I think would be an 8-voice Shruthi that is 8 separate Shruthi engines controlled via USB from a software panel. I may spend some time sampling the Shruthi with the Kurzweil so I can play it polyphonically. Of course, it won’t be the same.

  • Here are my 2012-2030 predictions. Some of those might happen in 5 years, some others only by 2030.

    • New Vintage. Fundamental theorem: people want their synths to make the sounds that were on the radio the year they got laid for the first time. This causes quite some inertia. Corollary: there’ll be something totally “vintage” and desirable about cheesy VAs.
    • Hyper-resolution. It took time for VAs to sound decent because anti-aliased waveform synthesis is tricky, and because discretizing analog filters is also tricky. We’re smarter at that now, but I feel it’s the wrong battle and brute force will work better. I predict that increases in CPU power, the use of new computing architectures (GPU-like, without the latency problems of GPUs), or the use of dedicated hardware (FPGAs) will cause designers to build synth engines working at very high sample rates (say 10 MHz), or on stretchable sample grids (like SPICE does), to get rid of aliasing problems. I predict at least one innovation in this domain will come from finite element methods. This will finally bring all the weird modular/analog stuff (audio-rate modulation, feedback…) to VAs. A lot of things will sound better because of that. (A toy sketch of the brute-force idea follows this list.)
    • Component-level emulations. So far the process of programming VAs is very empirical – one pinch of sampling, one pinch of theoretical analysis of schematics to derive transfer functions, one pinch of measurements for response tables. I think this is going to be made obsolete by systems with enough computing power to run transistor-level simulations of the original circuits. This is insane brute force but it’ll happen.
    • Parameter estimation techniques from audio signals. We’ll be able to record a sound and automatically get a synth preset that approximates it. The same way we have envelope followers, we’ll have “followers” for every musically relevant parameter, converting our samples into a bunch of automation curves for driving synths. The dream of the Fairlight & Synclavier – analysis & resynthesis – will happen, but with much better representations than sums of sine waves. Virtual Minimoogs will be the new sine waves, and this will blur the boundary between sampling and synthesis. People will sample a cello loop and alter the notes and the phrasing, without even knowing if this is achieved by messing with the original signal à la Melodyne, or by having recreated the sample with a synth model and just tweaking the parameters.
    • A synthesis-technique-independent parameter space. You were all talking about UI, but you were barking up the wrong tree. No matter how many knobs and touchpanels you put on some synths, they’ll still suck because their parameter set is too vast and does not relate well to concepts understandable by a human. A Shruthi with 2 pots would be easier to edit than a DX7 with 20. There’ll be some effort to develop a kind of general-purpose parameter space for sounds (with 30..40 dimensions) with the following properties: 1/ any combination of settings yields a musically interesting sound (so “random presets” are never static or chipmunks squeezed in a fax machine). 2/ a non-musically-trained person can roughly explain with everyday words what each parameter does. 3/ there exists a methodology to automatically learn mappings between the parameter space of a synth and this parameter space. This might become the biggest standard in music after MIDI, and will allow portability of synth sounds from one synthesis engine to another.
    • Physics in modulation sources. We’ll be able to use physical processes (a ball bouncing on the floor, the impact times of a stack of 1000 boxes falling on the floor, the velocity of an object thrown from a cliff) as modulation sources. The most common physical processes in music instruments (oscillation… exponential damping) have corresponding modules in synthesizers (VCO… envelope generator), inherited from the days of analog computers. We’ll move one step beyond this. One signature sound (and maybe one music genre derived from it) will come out of the use of a physical simulation as a mod source in a synthesis process. (See the bouncing-ball sketch after this list.)
    • In 2030 we’ll have 10 PB storage devices, which means that all music ever recorded by mankind will fit on them – so we’ll carry around our own copies of the celestial jukebox. We’ll “retrieve” music instead of “sampling” it. This might lead to new approaches to synthesis based on indexing. Say you program a drum pattern on a drum machine with the celestial jukebox loaded into it. The drum machine searches it, notices that this is a groove from a rare 70s Thai funk track, and suggests playing a sample from this track instead – or at least sampling the groove template and applying it to your pattern.
    • Physical synthesis itself will be big, but not for musical applications. The biggest drive will come from videogames and Hollywood. If we want a video of a spaceship that blows into pieces, we’ll build a 3D model of it and run a physics simulation of the explosion, not build a cardboard model and blow it up. If we want the sound of the Titanic hitting the iceberg, we’ll also get it through simulation rather than by getting a guy to hit a saucepan with some ski boots into Kyma. Similarly, the SFX of videogames will be computed straight from the 3D models and the physics engine. For movies, we’ll do the CGI and the sound design with the same toolchain – the same way there are 3D artists and texture artists, there’ll be a new kind of guy on the team setting up mechanical properties for physical simulation and audio generation. We’ll have a new name for those pieces of software simulating reality on both the audio and visual levels.
    • Analog synthesis will still be around as a niche. All new products will be fully discrete because the LM13700 will no longer be manufactured, and THAT/CoolAudio and the like will be out of business. Think of class D amps, switching supplies, software radios… Many things done today with analog functions will be done in the digital domain.
    • One widely popular, culturally significant instrument (the kind of instrument for which there’ll be a classical repertoire) will originate from a smartphone app.
    • Vocal synthesis will still be stuck in the uncanny valley – but led by the generation who grew up with Lady Gaga and Auto-Tune, our acceptance of the uncanny will be greater than ever.
    • Audio source separation still won’t be solved, and we’ll look at our efforts in this domain with the same “what the hell were we thinking?” air as we now look at 70s MIT-style AI trying to solve language processing.
    • MIDI will still be around.
    • One of the basic assumptions made to derive these predictions – the availability of energy allowing continuous technological advances and the free use of electricity for musical purposes, a state of peace and prosperity in the world allowing the pursuit of such futile matters, the existence of human civilization on this planet… and others I can’t name – won’t be satisfied, making all these predictions irrelevant.
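
    Since the “hyper-resolution” point above is essentially about brute force, here is a minimal Python sketch of that idea (illustrative only, not taken from any actual product): render a trivially-aliasing sawtooth at many times the target rate, then filter and decimate. The 16x factor is an arbitrary choice.

    ```python
    # Brute-force anti-aliasing: render a naive (aliasing) sawtooth at
    # 16x the target sample rate, then low-pass and downsample.
    import numpy as np
    from scipy.signal import decimate

    def naive_saw(freq, sample_rate, n_samples):
        phase = (np.arange(n_samples) * freq / sample_rate) % 1.0
        return 2.0 * phase - 1.0  # aliases audibly at audio rates

    def oversampled_saw(freq, sample_rate, n_samples, factor=16):
        hi = naive_saw(freq, sample_rate * factor, n_samples * factor)
        return decimate(hi, factor, ftype="fir")  # filter + downsample

    saw = oversampled_saw(2000.0, 44100, 44100)  # one second of output
    ```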
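    And for the “physics in modulation sources” point, a toy bouncing-ball simulation whose height curve could drive a filter cutoff or any other destination. The gravity, restitution and control-rate values are arbitrary.

    ```python
    # Toy physics-as-modulation-source: integrate a bouncing ball at
    # control rate; its height becomes a modulation curve.
    def bouncing_ball(duration_s, rate=1000, h0=1.0, g=9.81, e=0.8):
        out, h, v, dt = [], h0, 0.0, 1.0 / rate
        for _ in range(int(duration_s * rate)):
            v -= g * dt            # gravity pulls the ball down
            h += v * dt
            if h < 0.0:            # impact: reflect velocity, lose energy
                h, v = 0.0, -v * e
            out.append(h)
        return out                 # values in 0..h0, scale as needed

    cutoff_mod = bouncing_ball(3.0)
    ```
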
  • I think pichenettes may have ended this thread :) I came here to explain all my future predictions, but all of the stuff I thought about is just a subset of Olivier’s ideas anyway. Specifically, I’m most expecting to see/hear “Hyper-resolution” (although I think this would see more popularity in combination with granular synthesis techniques than with VAs), “Parameter estimation techniques from audio signals”, “Physics in modulation sources”, and “Physical synthesis itself will be big…”

    And yeah, MIDI will still be around, although I imagine some kind of MIDI 2.0 might be commonplace to carry the additional information utilised by more powerful techniques in synths, sequencers and other multimedia systems. It’ll be totally backwards compatible with current MIDI standards, though.

  • The more realistic the technology gets the more people seem to yearn for something more artificial sounding. This is at cross purposes with technology. I don’t think it’s going away anytime soon.

    I think user interface is huge, maybe as much as 50% of the whole equation. I think it’s one of the things that makes the minimoog enduring. It’s just so intuitive and human. Deceptively simple and difficult to recreate with new instruments. I’m looking forward to getting my XT/Programmer working!

  • One of the big points of an analog thing’s UI (say a Minimoog or an ARP, or whatever vintage instrument or machine you can think of) is the very physical relationship between your fingers and the sound. It’s a purely tactile experience. I’ve often felt a strong difference between twiddling a knob that’s directly, physically modifying the parameter that I want and a knob that’s modifying some software-driven parameter (and the Shruthi is unfortunately part of the second category). And please, do not speak about touch screens or a mouse! I don’t know if it’s a matter of precision, lag or feedback (the physical resistance is part of it), but there’s really a difference lying there that gives you a different feeling about the instrument. Playing a guitar, a real piano, a Rhodes or a violin, or singing, is a purely physical experience. Playing a Minimoog is one of them. Playing a software-driven synth is generally different because of the not-so-tactile feel (due to quantization, lag or feedback, I don’t know).

    I also remember viewing a video of one famous producer who explains that what he hates about Pro Tools is its lack of tactile feel compared to his big analog console, and also the fact that you actually need to read the track names and the parameter names to modify them: reading makes a big difference, since reading involves another region of your brain than the one used when listening to music, and sound in general. That totally makes sense, in my opinion. And that’s far more valuable than this “analog sounds better” cliché, especially when talking about mixing.

    So, in my opinion, here lies the UI question:

    • you need something that is simple enough that you don’t have to think about where you’re putting your fingers, or about whatever you use to tweak the sound. You only want to do it absolutely instinctively, and you really don’t want to read (or count) anything to achieve your goal.
    • you need something that makes the relationship between the software / hardware / whatever you use and your ears totally unconscious, or subconscious.

    So, in my opinion, ergonomics really should play an important part in the future of synthesis, and music in general. That’s definitely one of the reasons why analog synths are still so popular. The thing is not about re-creating those classic sounds, or re-creating existing sounds. The Minimoog sound was once revolutionary, and it still became an instant classic. The thing is about the way you manipulate your body to generate new shades of sound.

  • Well, the UI is one aspect I thought was important to mention ;)

    But imagine if you could play a sound to a synth and it would mimic its tonal qualities. Not sample it, but produce a tone as similar as possible. Now that would be a good interface to a synth :)
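
    That “mimic the tonal qualities” idea is basically parameter estimation. A very crude Python sketch of it – the render(params) function is a stand-in for some hypothetical synth that maps a normalized preset to an audio buffer of the same length as the target, and the spectral metric is an arbitrary choice, not any real product’s method:

    ```python
    # Crude preset estimation: randomly search a (hypothetical) synth's
    # parameter space for the preset whose spectrum best matches a target.
    import numpy as np

    def spectral_distance(a, b):
        # assumes non-silent buffers of equal length
        A = np.abs(np.fft.rfft(a))
        B = np.abs(np.fft.rfft(b))
        return np.mean((A / A.max() - B / B.max()) ** 2)

    def match_preset(target, render, n_trials=2000, n_params=4, seed=0):
        rng = np.random.default_rng(seed)
        best, best_err = None, np.inf
        for _ in range(n_trials):
            p = rng.uniform(0.0, 1.0, n_params)   # normalized preset
            err = spectral_distance(render(p), target)
            if err < best_err:
                best, best_err = p, err
        return best
    ```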

  • Well, my point is that trying to recreate the exact sound of something real only has a certain application and, frankly, only goes so far. People spend their whole careers recording orchestra instruments, brass, cymbals, etc. to get every freaking nuance out of them. So you have a multi-sampled instrument and you’re using umpteen different mic’d sound sources all MIDI-mapped, and, so what? You sound a little like a cymbal or a piano. Meh. When I need this stuff I want it, and I have software that provides it. It just doesn’t get me out of bed in the morning. If this is the future of synthesis, to make better synthetic versions of real acoustic instruments, well, that’s boring. To me.

    What has always excited me and lots of other people is creating sounds that aren’t natural. They are a function of their electronics and their design. They don’t happen anywhere else. Whether digital or analog, that is still the thing that makes me happy and makes me want to spend money. New noisemakers making new noises. That’s what it’s all about.

    How well you do that and how well the UI is executed is the difference between cool and classic. There are lots of the former and only a few of the latter.

  • Software that will do an accurate and quick analysis of a recording of a symphony and recreate it, with you making selections to vary individual instruments or groups of them – or recordings of birds in a forest. Real realtime morphing under $500 with a physically playable interface. Realtime morphing of scale, tuning, timbre. Better integration of video with audio. Real choirs made synthetically.
  • @MicMicMan: Great post! Totally agree. I’m in the middle of building a modular synth, and a massive part of the appeal for me is the tactile interface and knowing that if I turn that knob there, it will directly influence the sound there in such a way. Cause that is its only purpose in life. I’m not a Luddite, computers are great and I’m a child of the computer age, but there’s just something that doesn’t fit with using a computer to make sound, for me. Using it to mix and record sounds is a different thing.
  • >Well, my point is that trying to recreate the exact sound of something real only has a certain application and, frankly, just only goes so far.

    It’s a starting point. From there you can then alter it :)

  • I’m curious about how MicMicMan’s argument applies to polysynths.

    When you turn the cutoff knob on a MiniMoog (or on a modular), that directly changes a voltage at the control input of the VCF. When you turn the cutoff knob on a Prophet-5, its value is read by an ADC, written to some RAM location, and later sent to a bunch of DACs/S&H located on each voice. You can’t really have direct action on a polysynth, unless you find five-gang pots :)

    Does that mean that all polysynths lack the “direct, tactile” feedback of monosynths?

    Incidentally, this lag/feedback thing is why there are only 4 knobs on the Shruthi. Given the CPU constraints, 8 knobs with half the refresh rate felt too slow to me. That’s also why the programmer is not an official Mutable Instruments product.
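
    To make that trade-off concrete, here’s a hypothetical round-robin pot-scanning loop in Python (read_adc and apply_param are made-up callbacks, and none of this is the Shruthi’s actual firmware): with one ADC read per tick and a fixed tick budget, each of N knobs refreshes at tick_rate / N, so 8 knobs update half as often as 4.

    ```python
    # Hypothetical round-robin control scanning: one ADC read per tick,
    # so each of n_knobs refreshes at tick_rate / n_knobs.
    def scan_loop(read_adc, apply_param, n_knobs, n_ticks):
        last = [None] * n_knobs
        for t in range(n_ticks):
            k = t % n_knobs              # round-robin channel select
            value = read_adc(k)
            if value != last[k]:         # react only to actual movement
                last[k] = value
                apply_param(k, value)
    ```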

  • @pichenettes does that imply that the Prophet-5 had more processing power than the Shruthi-1?

    a|x

  • @toneburst: The Shruthi-1’s ATMega644p at 20 MHz has more raw processing power than the Prophet-5’s Z80 at 1 MHz, of course, but a huge fraction of this processing power is dedicated to computing the oscillators and modulations, running the sequencer, parsing MIDI, handling the display, etc… There’s very little left for scanning controls, and this very little depends on patch complexity.

    The Prophet-5 MCU does nothing other than read pots and send CVs (not even computing envelopes and LFOs) – nothing computationally expensive.

  • Would a dual-processor digital board help? One to read all the data and another to do much of the synth engine?

  • Yes it would help.

  • @pichenettes I see – because the oscillators, filter etc. were all analogue. Didn’t the later Prophet-5s have MIDI? Maybe there was a separate processor for that…

    a|x

  • And when the machines that followed started to do modulations (LFOs, envelopes) in software, shit happened in terms of UI (all those “keypad + data entry pot” interfaces), and shit happened in terms of CV quality (oberheim’s super slow envelopes).

  • This is a great thread. I have read it, front to back, several times now. Lots of interesting, thoughtful, and highly technical stuff. Very thought-provoking.

  • Of course, FM and subtractive are both based on things used for other purposes. I suspect nobody has managed to get other electronic or signal-processing techniques to output anything usable.

    If you search around, there are lots of interesting pages about other processes for messing around with signals which it might be possible to use to manipulate or produce sounds, e.g.:

    http://www.music.mcgill.ca/~gary/307/week5/additive.html
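
    For reference, additive synthesis in its most basic form (as on the page above) is just a sum of sine partials. A minimal Python sketch; the 1/n amplitudes approximate a sawtooth, whereas something like a K5000 lets you set each partial’s level and envelope by hand:

    ```python
    # Basic additive synthesis: sum n_partials harmonic sine waves.
    import math

    def additive(freq, sample_rate, n_samples, n_partials=16):
        out = []
        for i in range(n_samples):
            t = i / sample_rate
            out.append(sum(math.sin(2 * math.pi * freq * n * t) / n
                           for n in range(1, n_partials + 1)))
        return out

    tone = additive(220.0, 44100, 44100)  # one second, sawtooth-ish
    ```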

  • When I play my K5000S I think that additive definitely is usable. As would anyone playing a Hammond B3, for instance… Or some phase modulation stuff in the CZ or VZ series. Or physical modelling as in the Yamaha VL or the Technics WSA, or wavetables à la Fizmo or Waldorf…

    A fine example of additive is the THX Deep Note sound. That wasn’t done in real time way back then, but still…

    The K5Ks have the most important controls as knobs, but in order to really sculpt a sound you have to resort to editor software, as there’s so much going on under the hood. It’s very playable though!

  • @Olivier: You’re perfectly right about the polysynth thing, and I’m not able to give you an answer since I never had the chance to play one of those marvels. Probably some users here can give us some insight.

    I’m not saying that all software-driven synths (analog polysynths obviously being some of them) suffer from this issue, just that most of the ones I’ve tried do. I can give you the example of my Eventide TimeFactor, which is overall a kickass delay: still, it’s slightly less entertaining to play with self-oscillation on it than on a simple DIY PT2399-based delay, because of the not-so-instantaneous feel I get. It’s a bit like playing a video game at 30 fps versus 90 fps. I suppose there are some gamers lurking on this board who know the difference I’m talking about. In theory, above 30 fps you shouldn’t feel the difference, but in practice any experienced player will tell you that you need 60 fps or above to get a fine experience and overall good immersion. A synthesizer’s UI is all about immersion (or at least, it should be). And I’m absolutely certain that software-driven synthesizers could achieve that. Probably some already do. Still, there’s a lot of work left on the ergonomics of a lot of devices.

    But maybe I’m totally wrong, and I only think that because I never got the chance to play some devices blind (I mean: I always knew the technology before playing it).

  • @pichenettes, so there’s a case for saying that if a synth is ‘digitally-controlled’, it may as well be 100% digital. I remember an interview with the guy who designed the Sunsyn, as much as admitting that the multi-layered circuit boards required to make his analogue polysynth work were so complex they were actually full of bugs, which he was struggling to track down.

    a|x

  • @toneburst: I don’t agree at all. A digitally-controlled synth and an all-digital synth will have some similarities in terms of UI (the relative “lag” and “lack of direct feedback” mentioned by micmicman if the CPU is overloaded), but they are still worlds apart in terms of sound generation. Analog filters are hard to recreate digitally, and anything digital that oscillates has some aliasing problems – which can be alleviated under some conditions.

    I don’t know what kind of bugs the Sunsyn had. From what I’ve read, it’s mostly software bugs – all kinds of glitches/crashes due to coding errors, which have nothing to do with the synth being digitally-controlled analog or just digital – but since you mention boards, it could as well be PCB routing problems (high noise floor, signals bleeding into each other), and those are typical of analog.
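
    As a side note on why analog filters are hard to recreate digitally: even the simplest RC low-pass only approximates its analog response once discretized, and the error grows near Nyquist. The textbook one-pole below is a generic sketch, not any synth’s code – and real filters add nonlinearities on top:

    ```python
    # Textbook digital one-pole low-pass: y[n] = y[n-1] + g * (x[n] - y[n-1]).
    # Tracks the analog RC response at low frequencies, diverges near Nyquist.
    import math

    def one_pole_lp(x, cutoff_hz, sample_rate):
        g = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
        y, out = 0.0, []
        for s in x:
            y += g * (s - y)
            out.append(y)
        return out
    ```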

  • @pichenettes: Thanks for yet another well-thought-out and informative post. Both analog and digital have their own sets of inherent problems; the challenge for a designer is to avoid those pitfalls. The master designer does that and at low cost!

    In the end, I don’t care if the sound is made with a sponge, a transistor or a DSP. If the instrument is playable, sounds good and is rewarding to its user, it’s all good! Future classics will be easy to use and sound good. It’s that simple :)

  • Plus it doesn’t matter how good it sounds; once it has been recorded at 16-bit 44.1 kHz and compressed to hell, it seems nothing much will be left of it :)

  • @pichenettes I know what you mean. I love the sound of analogue electronics, but I’m not a purist about it. There are some kinds of synthesis that simply can’t be done by analogue means, and some of those sounds, I also like a lot. I was poking through the presets on my Nord Modular yesterday, and it struck me that the ones I liked most were often the ones that weren’t recreations of analogue subtractive synthesisers.

    Re. the Sunsyn thing: I’m pretty sure he was saying it was the complexity of the PCB routing that was the cause of the problems users had been experiencing. Can’t find the link right now though.

    a|x

  • Hi all!
    Firstly: great discussion and quality chat!
    I want to go back to the post that started this thread: “Stanford University is currently working on a new FMHD synthesis protocol with improved sound-generation properties…”. Does anyone have any inside info on the current status of this FMHD (I presume FMHD = Frequency Modulation – High Definition)? I’m trying a similar thing, and surprise surprise, it seems like a useful/meaningful/powerful/flexible/compact/etc etc etc GUI (including the middle layer between the GUI and the sound synthesis core) is in fact the major problematic part, as has been agreed throughout this thread already.
    Anyway, just a few thoughts on analog emulation: not to advertise any particular brand or company, I’ll just say that I vaguely remember how a certain emulation of the legendary Fairchild compressor using dedicated DSP underwent a blind test involving professional studio engineers and succeeded in fooling pretty much the whole crowd into thinking it was the original Fairchild. Further, fairly recent analog circuit analysis/modelling methods have actually reached the quality level where you can barely distinguish the emulation from the true analog device, at about 15% CPU on a dual-core 2.7 GHz machine. Really, the way I see it, we are just now entering the era in which old analog devices can truly be emulated at a reasonable CPU cost, so analog circuits will finally lose their last and most important advantage – the quality and character.
    Regards, Sash

  • @pichenettes

    “Analog synthesis will still be around as a niche. All new products will be fully discrete because the LM13700 will no longer be manufactured, and THAT/CoolAudio and the likes will be out of business.”

    But you forget that the technology is also developing in the DIY world.
    Now we have all this FPGA stuff, and we are able to build at home a computer even better than an Amiga 500. I think that in 2030 some DIY guys will have a kind of “little factory” for ASIC chips in their garages. And it will be no problem to make clones of SSM, CEM and LM13700 chips, or to build their own :P

  • @MaxZorin: I highly doubt it. We’ve got so many cool things in the DIY world (cheap microcontrollers and development tools, FPGAs, CNC tools…) just because demand for those technologies in the heavy industries (auto, military, medical…) drove the prices down. Atmel doesn’t make cheap ATMegas for the DIY market in the first place – they made the chip for a bunch of industrial applications requiring simple MCUs, sold enough of them to get good prices and pay back their R&D… and the DIY scene reaped the benefits…

    If there’s no demand for analog ICs in the big markets, I doubt any cheap technology for manufacturing them will appear in the DIY space. The example I would take is that of tubes. Is there any tech nowadays to make those at home easily? They feel so weird and alien compared with today’s technology, starting with their operating voltages. I suspect that something like an SSM chip will look equally alien when compared to 2030 tech – it just won’t “click” with the operating voltages, design techniques and philosophies of the time. We’ll surely have amazing tech to manufacture amazing things orders of magnitude more complicated than an SSM2044, but the whole idea of using this tech to make an SSM2044 will be silly. Just like we have great tech for the transportation of goods and information, but it hasn’t led to any amazing development in the field of horse technology.

  • There is a guy making tubes by himself, and while it does not look impossible, I would not say that it looks easy, and you need a lot of stuff for the whole process…

    The question is, will there be demand for older parts, and if not, will someone be crazy enough to attempt to do it anyway, thereby (hopefully) triggering new interest in older parts..

  • Some guys actually build DIY tubes, but they’ve built a whole load of DIY tools like pumps and stuff, and they’ve mastered turning glass tubing into actual tubes… It’s not the average DIYer who’s capable of such things. I’ve only heard of 2 guys worldwide who make their own DIY tubes from scratch.
    Tubes are still used (and developed?) nowadays when very high power or very high frequency is required.
    For now, I’m building up a nice stock of vintage Soviet-era tubes :) This technology is fascinating.

    But unlike Olivier, I’m not sure that we’ll see a major shift in electronic component technologies within a few decades – I mean, resistors and capacitors will stay necessary, and I suppose that we’ll continue working with ‘low’ voltages (0-15 V mostly) like the ones we’ve been handling since the 60s. About fine analog components (like Burr-Brown op-amps, OTAs/VCAs), I’ve really no idea.

  • I think an increasing amount of stuff will run at low voltages, to the point where 3.3V or 5V will be like today’s 12 or 15V :)

  • That matches the German strategy for the Energiewende™.

  • That’s the USB effect, make things run from 5V or less.

  • So we’ll have to live with less headroom, which doesn’t really matter because kids already got used to MP3s….

  • We’ll have infinite headroom, the whole signal chain will be 64-bit floats :)

  • Damn, I forgot again that the future consists of using 250,000,000 transistors to simulate the behavior of a 16-transistor Möög ladder filter….

  • Not to mention all the lines of code.

  • Even with infinite headroom, we’ll still probably only use the top 5 dB.

  • There are as many (or more) lines of code behind a “real” transistor. It’s running on The Matrix hardware (or Vishnu’s dream)...

    Cue picture of Vishnu with Matrix-style letters falling out of the cobras.
