Posts by lsuess

    What do you all think? Protein folding? DNA nanomachines? SPMs with molecular grippers like Drexler and Merkle and others mentioned years ago? Engineered bacteria and viruses? Laser, Electron Beam and other energetic beam systems?

    I guess a bit of all of them - viruses and bacteria only as means for producing artificially designed de-novo proteins.


    We had some related discussion about possible approaches here:
    What are the remaining lurkers up to?


    The next major milestones I'm looking for are:
#) Putting
hierarchically self-assembled (already experimentally demonstrated in isolation)
hinged and rigid (already experimentally demonstrated in isolation)
DNA/protein/peptoid/foldamer structures onto photolithographically etched chips (already experimentally demonstrated in isolation)
such that they get aligned to the etched structures.
#) Introducing high-rate bistable electrostatic actuation via the chip's surface.


Then it gets murkier:
#) Introducing mechanical demultiplexing in the self-assembled, sufficiently stiff foldamer structures.
#) Making something like a protein-block-based 3D printer (on the chip surface) and thereby getting rid of unreliable, high-error-rate self assembly.
#) Switching to bio-mineralisation materials (I perceive a big knowledge gap here) - as elaborated in the appendix of "Radical Abundance".
#) Building micro vacuum chambers and finally switching to advanced materials like diamond and silicon - also as elaborated in the appendix of "Radical Abundance".


In one of his more recent presentations Eric Drexler presented an approach using free-floating, in-liquid foldamer mechanosynthesis devices.
He proposed to design
#) a three-axis site activator via hierarchical self assembly and
#) to use trichromatic light for directed motion actuation (already experimentally demonstrated in isolation).

    I guess you assume a level of molecular nanotechnology, and mechanosynthesis where you can place every element of the periodic table in any practical way physical law permits and pick atoms from any local molecular environment.


Long before that level of capability is reached, earlier forms of high-throughput advanced molecular nanotechnology and mechanosynthesis
(e.g. synthesis of diamond and graphene) will be available. Those forms of mechanosynthesis are likely to be rather limited in the way they can place/handle atoms. (Side note: that limitedness does not transfer to the earlier products - completely differently behaving mechanical metamaterials can all use the same base material.)


Of course on the nanoscale atom and molecule bonding topology is always fundamentally reversible. But that does not mean that implementing the backward process technically is as easy as implementing the forward process. Actually there are a number of reasons why implementing the backward process (disassembly) is much harder than the forward process (assembly).


Since mechanosynthesis is basically a blind, open-loop-controlled assembly process (heavily relying on extremely low error rates), once previously mechanosynthesized crystal-molecule assemblies have incurred radiation or thermal damage, mechanosynthetic disassembly can't be performed by simply running a nanofactory in reverse - this would just end in a mess. Even if the product crystal-molecules are damage-free it's still difficult. Some assembly steps may have way higher error rates in reverse (that's where energetic reversibility comes into play).
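
To make the "extremely low error rates" point concrete, here is a minimal sketch (my own illustration with made-up numbers, not from any of the cited sources) of how the defect-free yield of a blind open-loop process collapses unless the per-step error rate stays far below the reciprocal of the step count:

```python
# Illustrative only: yield of a blind open-loop assembly run of n_steps,
# assuming independent errors with a fixed per-step error rate.
def defect_free_yield(per_step_error_rate, n_steps):
    """Probability that every one of n_steps blind placement steps succeeds."""
    return (1.0 - per_step_error_rate) ** n_steps

for p in (1e-6, 1e-9, 1e-12):
    print(f"error rate {p:.0e}: yield after 10^9 steps = {defect_free_yield(p, 10**9):.3f}")
# error rates well above 1/n_steps give essentially zero yield,
# which is why blindly running the process backwards over a damaged
# structure just ends in a mess.
```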


So I'm concerned about giant mountains of diamondoid waste piling up, not because recycling is fundamentally impossible (it certainly isn't),
but because I see a timespan coming in which it is technically WAY easier (and economically cheaper) to just create systems for making diamondoid crystal-molecules by mechanosynthesis than to create mechanosynthesis systems that are also capable of taking diamondoid crystal-molecules apart.


(Re-use of microcomponents may mend that problem, but only to a degree, since they may become obsolete - analogous to software.)

One issue I have with predictions about AI is one that Eric mentions too briefly in his talk: goals and motivations of AI. All living things have a goal built into them due to the way life evolved: perpetuate oneself and one's offspring. Eric states we won't know or will be unsure of the goals AI will have. I don't see why we wouldn't, since unless we explicitly program goals in, the AI will have intelligence but it won't have any motivation to do anything with it. Without goals it won't have motivation to act for or against anything. Probably a good reason not to program in any goals.

<sarcasm> All living things except humans in wealthy countries too smart for their own good. They die out. </sarcasm>


Reinforcement learning means rewards must be defined. Thus neural AI without goals doesn't exist.
We make AI to serve a purpose (even if it's just nonsense "art"), thus we train it with the data sets we have readily available and tell it how well it does at reaching its goals. This is already leading to unpleasant discriminatory occurrences due to politically, racially, gender- or otherwise biased training data.


A major problem is that if an AI finds a way to cheat to get its reward and it is unsupervised, it will start and continue to do so perpetually - potentially causing serious trouble. With multiple agents supervising each other there can be a self-regulating system, but with our limited intelligence we will barely have the means to know whether and where potential system instabilities lie and how severe they might be.


What I find most scary are the kinds of personal assistant AGIs that Google is currently very actively building. This is basically becoming a virtual version of yourself, knowing you inside out. It - for better or worse - could live on after your death. If the user is/was an a*****e spammer you might get a very nasty AGI. BTW rogue AGIs are certain to collaborate if it is beneficial for them. A cyberspace inter-AGI war may become a possibility.


With advanced AGI here sooner than advanced nanotechnology, humanity may end up in the inconvenient "being a pet" situation without the option for anyone to upgrade their brains to keep up. This is all SciFi right now, but if I happen to see that day I hope most of the serious flaws that current-day software carries with it will be resolved before any mental upgrades begin - I don't want google/facebook/... in my head, nor do I want dependency hell, nor uncontrollably piling-up entropy invariably leading to a system crash and a necessary reinstall - which people still think is normal - IMO it isn't. <joke> Person A (panicking): "My new video driver isn't working properly, I just see blue, what should I do? ..." Person B (indifferent): "Just delete and reinstall the brain-ware you're running." Person A: "WHAT?!" </joke>


What mystifies me about AI/AGI are the fundamental differences from the human brain:
* just a few basic instincts flooding the whole brain as feelings - vs - very topically fine-grained reward structures
* the fundamental single-threadedness of the human brain - ever tried to listen to two people at the same time? - can this be learned?

Someone is likely to prove me wrong, but I'm not sure that foldamer engineering will bootstrap advanced APM. At best I suspect it may be a dispensable tool, much like one can use a screwdriver to pound a nail - though a hammer would be the better tool. On the other hand, there is no easy way to use a hammer as a screwdriver.

The most recent developments in structural DNA nanotechnology (DNA oligomers, I believe, fall under the class of foldamers) made me much more optimistic about a bootstrapping pathway that relies on foldamers indispensably - roughly as described in the appendix of Radical Abundance. (Note that by indispensable I don't mean indisposable. That is, I do think that they can, should, and must be stripped away once bootstrapping has succeeded.)



The main papers that made me more optimistic were these five:


    1) Demonstration of localized hinges and sliding rails:
(Absolutely essential for any robotics-like action.)
Paper's name: "Programmable motion of DNA origami mechanisms"
    Found here: https://www.foresight.org/nanodot/?p=6430
    Full open content: http://www.pnas.org/content/112/3/713.full.pdf


    2) Hierarchical self assembly of structural DNA nanotechnology:
    (Essential for more complex systems)
In the first step, floppy DNA oligomers find each other and link via the normal method. In the second step, the finished, stiff cubic/hexagonal voxel-grid building blocks self-assemble by shape complementarity (reversibly, driven by varying salt concentration).
Paper's name: "Dynamic DNA devices and assemblies formed by shape-complementary, non-basepairing 3D components"
    Found here: https://www.foresight.org/nanodot/?p=6606
    Full open content: http://science.sciencemag.org/…EI&keytype=ref&siteid=sci


    3) Bohr radius resolution manipulation with DNA nano-structures:
    (Essential for early forms of mechanosynthesis)
    Papers name: "Placing molecules with Bohr radius resolution using DNA origami"
    Found here: https://www.foresight.org/nanodot/?p=6890
    Somewhat hidden paper: http://bionano.physik.tu-muenc…/funke_NatureNano2015.pdf
    Supplementary info paper (BIG): http://bionano.physik.tu-muenc…ke_NatureNano_2015_SI.pdf


    4) Assembly of multi micron scale AP pegboards:
(Probably useful for organizing bigger systems via AP self-centering pick-and-place that lacks atomic resolution.)
Paper's name: "DNA brick crystals with prescribed depths"
    Found here: https://www.foresight.org/nanodot/?p=6350
    Full open content: https://yin.hms.harvard.edu/publications/2014.crystals.pdf
    Supplementary info paper (BIG): https://yin.hms.harvard.edu/pu…ns/2014.crystals.sup1.pdf


5) Templated gold growth in AP DNA nanostructures:
(Maybe useful to include stiffer parts for tooltips, though this does not look too controllable - bulging)
    Found here: https://www.foresight.org/nanodot/?p=6324
    Papers name: "Casting inorganic structures with DNA molds"
    Full paper: http://www.ncbi.nlm.nih.gov/pm…60265/pdf/nihms641769.pdf



    What I'm still eagerly waiting to see is:


    A) Fast bi-stable electrostatic actuation of DNA hinge nano-structures via electric fields emanating from very small contacts on a chip surface.


B) Demonstration of AP single-moiety mechanosynthesis with water-synthesizable diamondoid minerals (quartz/pyrite/apatite/calcite). This hasn't been demonstrated with macro-scale AFMs either.


I recently had some discussion defending the idea of advanced APM where I wrote a bit about my own interpretation of that pathway beyond what is written in the appendix of Radical Abundance.
    You can find this all the way at the bottom down here:
    https://debunkingdenialism.com…omic-scale-manufacturing/


    Sorry about the amount of links here, but I think they're relevant.



    I think of SPMs as one tool of many that will be needed to bootstrap nanotechnology. That an STM has limitations is no different than other tools. Based on my reading of history, I think progress in nanotechnology will only take off once more "amateurs" can begin work on it.

    I believe one issue that got in the way was lack of money or capital - and uncertain demand. While a determined amateur can build a marginally working inexpensive hobby STM, buyers of commercial systems have higher expectations and getting a refined product to market is not cheap. Of course, back then we didn't have Kickstarter, GoFundMe, IndieGoGo, or RocketHub as options to raise capital and establish a seed of potential customers.

I too think of SPMs as one tool of many that will play an important role.
I think parallel AFM in the form of mechanosynthesis and pick-and-place action at the micro- and nanoscale will do a big part of the work. Single-tip synthesis might be useful for figuring out reactions or putting bigger blocks together, but building up a diamondoid assembler with a single macro-scale AFM (the early idea now dismissed by Eric Drexler) by now seems to me like jumping to the moon with just your legs - metaphorically speaking.


The problem, I feel, is that what is achievable by DIY means, or by slightly more professional Kickstarter-funded means (provided it gets funded), is not sufficient for making meaningful bootstrapping progress.


I feel that some very essential tools will not make it to a widely available DIY state (e.g. cryo-TEM tomography; UHV systems - unless something like my crazy micro-UHV-system idea miraculously works out; automatic pipetting systems ... to a lesser degree, as I'll mention further below).


For the reasons I elaborated above I think that systems of hierarchically self-assembled foldamers will play a major role too, beside SPM. Thus I'm thinking of using a top-down AFM to image and interact with bottom-up foldamer structures. And those need some of these additional capabilities.


    About kickstarter funding:
Who is really interested in playing around with a relatively cheap - but not extremely toy-like cheap - ~$999 SPM device, besides a handful of geeks? I mean right now, and not when it becomes really interesting because APM bootstrapping is beginning to succeed. (Kind of a "who would need a computer in his home" situation.)


I doubt that major parts of bootstrapping will be done by a large DIY community. (I'm not happy about that.)
I think it's not unlikely that much of the bootstrapping will happen in service provider labs for early nanotech medical companies.
Sadly this could quite likely result in the products being accompanied by a lot of closed-source problems, restrictions, and regulations.


I'm not saying that I'm certain that a cheap SPM kickstarter project won't work out funding-wise.
I just doubt that DIY-to-semi-professional SPM living-room devices will play a major role in bootstrapping.


In the macrocosm, with RepRaps, there's a lot less that can't be done DIY or is hard to do DIY.
Also the products have immediate usability value.
Even with these things in their favour, the evolution of RepRaps isn't crazy fast.


    To drift a bit off-topic:
We still don't have a self-replicating 3D printer that not only prints but also assembles itself.
I think this should be possible (not a small machine) and could drive the cost of 3D printers down by a further ~50% and give additional
6DOF robotic pick-and-place capability (remember the DIY massive automatic robotic pipetting system for DNA nanotech I mentioned before?).
Also such a self-replicating pick-and-place robot would demonstrate principles for self replication that uses standard prefabricated parts as building blocks. These principles could then at least in part be applied to nanosystems made of AP self-assembled foldamer parts. And much later the principles could be used in the second assembly layer of advanced nanofactories - albeit with a wider, less compact cycle, meaning even more and less generic parts.
A self-assembling macro robot is what I'm attempting with my RepRec project idea:
http://reprap.org/wiki/RepRec
This will soon grow; I had some major ideas today.


Another pathway for cost reduction, beside RepRap-style cost reduction, is miniaturization like in the computer industry.
This pathway is big-company centric since obviously MEMS production isn't DIY-doable.
Miniaturisation of SPMs does not seem to be progressing fast. There are some MEMS AFM approaches that still need humongous UHV systems.
I'm not aware of any attempts at parallelizing SPMs that wield atomic resolution yet (Millipede e.g. was never meant to have atomic resolution).


Sooner or later the possibility of more or less self-assembled nano-AFMs might come up ... ok, I'm drifting off ...

    It would certainly be cool to make atoms visible in the livingroom but I gave up on this endeavour because:
    # I worked on a professional one (Omicron) and realized how hard it is to scan z steps greater than one or two atomic layers.
    # Playing around with structural DNA nanotechnology (and even more so for other stuff) requires a full-blown lab (automatic pipetting system ...)
# I'm not sure whether larger DNA meshes can be scanned electrically with an STM. If AFM is necessary it gets harder. DNA is strongly negatively charged due to its phosphate groups. If a DNA structure sits dry on a surface I guess some alkali metal atoms (Na) remain on there cancelling the charge, but I'm pretty sure they will be immobile.

    # Also there are very few surfaces suitable for in air imaging (HOPG, gold, maybe indium ...)
I thought about the possibility of miniaturizing and demonetizing UHV systems and came up with the concept of this crazy contraption (probably pure fantasy and a waste of time):
    https://www.youmagine.com/desi…y-miniaturised-uhv-system
    It still looks very dire when one is looking for commercially available miniaturized turbopumps.
I'm not including designs like the one built into the Mars MSL rover, which was made small by raising the price - conventional ultra-precision machining.
The non-AP MEMS friction problem is probably the reason why there are no MEMS turbopumps yet.


I'm wondering whether atomically precise de novo protein design could be used to create atomically tight positive displacement UHV pumps for micron- to mm-scale chambers? There's probably too much low-atomic-weight dirt from the production process remaining, and heating to >200°C would hurt most proteins. Maybe with intermediate steps - gold-coating a template chamber ... drifting off again ...


In one of the Foresight conference videos there was a person presenting his company's goal of producing ultra-cheap STMs in large quantities to accelerate nanotechnology research by making the tools more widely and easily accessible - sadly I can't find that video anymore. I've also forgotten the name of the presenter. It might be "Saed"?? I'm not sure.

The only insight I can note at this point is that false theories held up progress for decades due to the human failing of judging the merit of ideas by who espoused them rather than by objective analysis of the ideas. Well, some ideas sounded quite reasonable too, and evidence to the contrary was attributed to experimental error, not to an error in the theory. It seemed to take more evidence than necessary to get a theory in trouble.

Lipid rafts come to mind. I heard that they may not really exist, but they are everywhere in the literature.
I don't really see how that applies to advanced APM. There I rather see that correct and useful old ideas have been, and still are, tragically misused to judge possibilities/impossibilities in a very different context where they no longer apply.

I'm thinking of a novel design for a scanning probe microscope that I can 3D print - at least the mechanical part. STMs have already been 3D printed, and the common plastics like PLA and ABS have decent properties for the purpose (such as a low thermal expansion coefficient). There is at least one piezoelectric plastic (polyvinylidene fluoride) available from at least one source (3dogg.com/c-3265319/pvdf-filament/) but it is expensive. Though my idea doesn't employ piezoelectrics; more traditional approaches do.

    I thought of that too.
    Back then there was only one project around.
It seems by now it is only accessible through internet archive services anymore :S
http://archive.is/sxm4.uni-muenster.de
Just for fun I made a 3D model out of the plans they supplied.
I wouldn't recommend printing this since it is absolutely not optimized for 3D printing - waste of plastic - ugly blocks.
    http://www.thingiverse.com/thing:42053


    One design that I especially liked is this one:
    http://www.instructables.com/i…%A1%AF%E5%BE%AE%E9%8F%A1/


This is one of the designs that currently ranks top in the Google search results:
http://hackaday.com/2015/01/13…pe-sees-individual-atoms/
It's so minimal that there isn't much left to print at all.


    Here's a link to a link-list to several DIY STM projects that I found in my bookmarks:
    https://dberard.com/home-built-stm/links/


It's scary how many of the links I dug through here right now are already dead.
Luckily Google image search and archive services give some chance to still find that old stuff.

    The reason I think that Eric Drexler has switched his focus is this video of a somewhat recent talk he gave:
    Eric Drexler - A Cambrian Explosion in Deep Learning
    Filmed at the Free and Safe in Cyberspace conference in Brussels in Sept 2015


Also, with Radical Abundance written and published, I think a major load is off his shoulders.

My interest in the subject is more as an area of problems that can be better attacked by nano robots than as a source for bootstrapping nanotechnology. My impression is that most of the advances in things like DNA and RNA manipulation (e.g. CRISPR/Cas9) appear to be due more to discoveries of ancient enzymes that can be turned into tools than to clever de novo nucleotide/protein engineering.

    So you mean like the Nanomedicine books by Robert Freitas (I haven't yet read them)?
About the discovery of ancient enzymes: molecular biology is most definitely a treasure trove for the creation of future medical treatment methods, containing stuff that we "never" could come up with ourselves. With the recent discovery of CRISPR/Cas and newer related techniques quite a "quantum leap" (in the sense of discrete, not small) was made - thinking back on the low survival chances with crude methods like cloning (well, this is not quite gene editing but a full swap) and the basically random point DNA insertion of older gene editing techniques.

I think that with more and more of the ancient stuff becoming decoded, de novo nucleotide/protein/peptide/peptoid/foldamer engineering (use case A: artificial enzyme systems) will become more and more important. I think that in this use case it is important to first understand "simple" examples from nature in order to then be able to improve upon them. There are two more possible uses for de novo foldamer engineering (foldamer being the most general case): B) as "simple" delivery vessels for drugs, and C) for bootstrapping advanced APM. I have a hard time guessing whether use case B is right around the corner or will still take more than a decade to get going.

What is really incredible is that the human genome is just a few gigabytes in size and still can compress so much information. If one compares that to the data size of modern operating systems it seems ridiculous. I mean, how many different types of proteins and other molecules can be encoded in there? As a metaphor, the fluent passage from system design that evolved to be nicely separable and orthogonal to a completely entangled mess that contains a lot of stuff that is just there because it doesn't cause problems makes researching molecular biology like discharging an old battery - you never know how much is left. Then there's the truly random element of thermal motion, not present in normal computer systems, which adds another fascinating aspect. Ok, I'm drifting off too far.


    ...

    >> The limits of height


There is the interesting question of how high one can go.


With the capability to lift stuff high enough one could e.g. start thinking of raising the speed of a linear-rail-acceleration vacuum train to a level where it essentially becomes a propellant-less direct-orbit-injection space launch system. The space vessel is released into the atmosphere where the density is low enough that the deceleration shock does not damage or destroy the cargo. More on that later.


So what is the limit? This seems to be a rather hard question to answer.
Under the assumption that buckling instabilities can be avoided with fractal truss frameworks for cell inflation, scaling seems to imply that by simply keeping the mass of the internal structure of the metamaterial cells constant while spreading out the volume, one can keep up with the falling density of air while remaining capable of compensating the external pressure.


With rising volume the mass of the super-thin sealing surfaces does not lose relevance. While both the mass of the displaced gas and the mass of the outward-pushing truss structure in a cell stay the same, with growing volume the surface area rises. So either the walls are made thinner or the lifting capacity will decline. (More analysis needed - see the rough sketch below.)
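
Here is a back-of-the-envelope sketch of that scaling argument. All cell parameters (wall areal density, truss mass per cell) are assumptions of mine, purely illustrative:

```python
# Illustrative scaling check: evacuated cubic cell with constant truss mass,
# wall mass growing with the surface, buoyancy growing with volume but
# shrinking with the falling air density at altitude.
import math

RHO0, SCALE_HEIGHT = 1.225, 7900.0   # sea-level air density [kg/m^3], scale height [m]
WALL_AREAL_DENSITY = 1e-6            # assumed ultra-thin sealing wall [kg/m^2]
TRUSS_MASS = 1e-9                    # assumed truss mass per cell [kg], held constant

def net_lift_per_cell(cell_edge_m, altitude_m):
    """Buoyancy of one evacuated cubic cell minus its wall and truss mass."""
    rho_air = RHO0 * math.exp(-altitude_m / SCALE_HEIGHT)
    buoyancy = rho_air * cell_edge_m ** 3             # mass of excluded air
    wall_mass = WALL_AREAL_DENSITY * 6 * cell_edge_m ** 2
    return buoyancy - wall_mass - TRUSS_MASS

for edge_mm, h_km in ((1, 0), (3, 20), (10, 40), (30, 60)):
    lift = net_lift_per_cell(edge_mm * 1e-3, h_km * 1e3)
    print(f"edge {edge_mm:2d} mm at {h_km:2d} km: net lift {lift:+.2e} kg per cell")
# Growing the cell keeps buoyancy ahead of the constant truss mass, but the
# wall mass grows with the surface and eventually eats into the margin.
```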


At some point one ends up with e.g. long trusses of single-walled nanotubes (or fireproof sapphire rods) that become wobbly due to thermal vibrations alone. Or with single sheets of graphene as walls. But long before that, destructive environmental factors ("forces of nature") may put a stop to such ambitions.
(Here I'd like to ask readers to please check this rough train of thought for major mistakes.)


Today's helium balloons hit a wall at about 50 km height. They use roughly 3000 nm thick plastic film.
Jaxa: http://global.jaxa.jp/article/interview/vol42/p2_e.html
By replacing the helium fill with a vacuum, with most of the shell thickness converted into internal fractal trusswork structures that resist the now-occurring external pressure, one gets rid of the problem of varying internal pressure due to day/night temperature variations. The other way around: keeping the height constant, the external pressure will roughly stay the same while the external air density will vary somewhat between day and night - that seems less problematic.


At a certain height Earth's atmosphere begins to unmix and stratify, with lighter elements higher up.
This region is called the heterosphere. Here are diagrams:
https://commons.wikimedia.org/…A4re_Temperatur_600km.png
https://de.wikipedia.org/wiki/…4re_und_Heterosph%C3%A4re
This poses an additional limit on how high one can go.
It's very questionable whether anything above 100 km can be reached at all, though.



>> General remarks about the Earth's atmosphere


As a rule of thumb the air pressure in Earth's atmosphere halves with every 5.5 km of height.
Thus, compressed down to 1 bar in hypothetical weightless space, the whole atmosphere would be about 11 km thick.
With a few exceptions it is probably impractical to put any major weight-carrying stuff much above that height mark.
Being high enough to be above most of the weather activity (bottommost part of the stratosphere - like planes) may be beneficial for some applications.
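
A minimal sketch of that halving rule of thumb (isothermal approximation; the real atmosphere deviates from it, especially high up):

```python
# Pressure relative to sea level under the "halves every 5.5 km" rule of thumb.
def pressure_bar(height_km, halving_height_km=5.5):
    return 0.5 ** (height_km / halving_height_km)

for h in (0, 5.5, 11, 22, 50, 80):
    print(f"{h:5.1f} km: ~{pressure_bar(h):.2e} bar")
```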



    >> Propellant-less space launch system ??


Such a thing would be a pretty dense and heavy, very long, perfectly circular (Earth-radius) tube floating at a height where the atmospheric density is low enough that the deceleration shock on release into the atmosphere is less than 10 g. (How does one calculate this height for e.g. LEO speed (~8 km/s) and escape speed (~11 km/s)?)
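
A very rough sketch of one way to attack that question. The vessel mass, cross section, and drag coefficient below are assumptions of mine, purely for illustration, and the isothermal atmosphere model is crude at these heights:

```python
# Instantaneous drag deceleration (in g) of a vessel right after release at
# orbital speed, for an assumed mass / cross section / drag coefficient.
import math

G0, RHO0, SCALE_HEIGHT = 9.81, 1.225, 7900.0  # m/s^2, kg/m^3, m

def deceleration_g(speed, altitude_m, mass=10_000.0, area=10.0, drag_coeff=1.0):
    rho = RHO0 * math.exp(-altitude_m / SCALE_HEIGHT)
    return 0.5 * rho * drag_coeff * area * speed ** 2 / (mass * G0)

for h_km in (60, 80, 100, 120):
    print(f"{h_km:3d} km: {deceleration_g(8000.0, h_km * 1e3):8.3f} g at 8 km/s")
# Whether 80 km is "high enough" depends strongly on the assumed mass per
# frontal area of the released vessel.
```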


Josh Hall proposes a sequence of 80 km high towers (mesopause - coldest point in the atmosphere, about -100°C) holding such a space launch system up. (It's rather scary imagining them crashing down.) But is the pressure at 80 km low enough to allow direct orbital launch?


A circumglobal mesosphere-to-thermosphere space launch corridor would, even if the mass per length is kept as minimal as possible, have to have a buoyancy-providing enclosing lifting device of imposing diameter (estimated minimum at 80 km: ~2 km for 100 kg/m; ~5 km for 1000 kg/m?). This is starting to reach down into the denser parts of the atmosphere, making it more like a ship swimming on the atmosphere.


The lifting device for such a system would of course be ridiculously filigree. It's not unlikely that such ambitions will be thwarted by UV damage or micrometeorites. Wikipedia says: "The lower stratosphere receives very little UVC", but here we are higher than the ozone layer (average height of the ozone layer: 15-20 km, in the tropics 20-30 km - btw: stratospheric airmeshes could be used to replenish or further fortify the ozone layer), and UV-B and UV-A come through anyway. The one thing that's unproblematic is the massive availability of space, precisely because nothing else is capable of staying stationary at these heights.


As long as one does not get too far into the overpressure regime (which limits today's balloons) one may be able to extend the height limit a bit by more conventionally using a bit of lifting gas to help. There's plenty of hydrogen available, but in an oxygen-rich atmosphere, even when enclosed in a fireproof metamaterial, this seems unsafe (true?). Both helium and neon are rather rare. It would take much time and energy to concentrate them for lifting bigger stuff like a space launch system.
In a hyper-long-term perspective one could say that concentrating all the light noble gases of our atmosphere is a good idea, since it keeps them from further depletion to outer space. One could speculate about light noble gases as a space resource, but the solar system's major helium depots, Uranus and Neptune, seem to have gravity wells too deep to send out anything but photons.
Placing a vacuum balloon space launch system on Uranus or Neptune would be even more challenging due to their lightweight hydrogen atmosphere.


As mentioned before, to lift dense and heavy objects to great heights a continuous gradient towards lower-density material is necessary.
Thermospheric space launch systems would take that to the extreme.



    >> Conclusion


    As you see the amount of possibilities with this kind of technology would be enormous.


    I have an intersecting set of ideas collected on my wiki:
    http://apm.bplaced.net/w/index…ust_metamaterial_balloons


    Any feedback on those ideas?


    FIN

    >> The awesome part - for nice illustrations and dreaming about the future


    In some of the vertical "sky-strings" elevators could be integrated.
A (pressurised) stairway to the stratosphere would be an epic multi-day climb.
Imagine the view from up there. With three-point rope suspension one can actually reach any point in the sky.
Besides the view you'll get perfect silence (and quite a bit of radiation).


Climbing metamaterial "sky-strings" directly (assuming structure to grip onto is present) might feel
like standing on a rubber bouncy castle. Given too much pressure the material might temporarily collapse where you stand on it / where you grip it. This quickly gets more serious with rising altitude, where the metamaterial becomes less and less dense (cell size grows).


So to properly support human climbers (or other stuff like strengthening ropes or chemomechanical power cables), proper solid structures are necessary. Although future devices will be very light by today's solid-steel-world standards, these functional core structures are still heavy and dense relative to the lifting metamaterial. So to lift the strong, dense "core structures" one has to link them to the lighter-than-air metamaterial. At low altitudes this might work out pretty directly (just as with current-day balloons). At higher altitudes a gradient of cell size, or even a fractal root net of smaller-sized non-floating cells, can softly connect to the big cells that provide the lighter-than-air lifting density.



    >> Transportation


On a much smaller scale than weather control, air-meshes seem to be applicable to local urban aerial transport.


Regarding transport I extended the ideas presented in Josh Hall's book Nanofuture:
For reference, in Nanofuture Josh Hall proposed the individualist solution without lighter-than-air structures: untethered, free-moving, shape-shifting vessels that lift off on very, very long telescoping stilts to keep downwind noise from air turbulence low. Once in the air they switch to a second, sailship-like mode to gain both speed and height, and once at speed they change again to a third, jet-like mode. He proposes "infinitesimal-bearing parallel motion cloaking". My two cents: "adiabatic normal motion cloaking" could also be used. (I've explained both techniques above.)


I thought about replacing the scary telescoping-stilt start with safer mobile lifting pillar balloons (pillar-shaped to save space on the ground) or just static cables hanging down from the air-mesh; both would lift gondolas up and down to and from a rail system in the airmesh, thus replacing part of local transport with very direct, congestion-free, gondola-like transport. A form of transport that does not use the inefficient method of blowing out air for lift :S (and propulsion) but simply the reaction force on the airmesh grid.


With increasing distance it makes sense to remove the "obstacle air" altogether.
Airmeshes allow putting horizontal vacuum-pipe "railtracks" in the air where there are no hard obstacles that make speed-limiting curves necessary. Superfast "aerial vacuum trains", so to say. The vacuum tube could be seen as a very large unsupported vacuum cell in the core of a very fat, also vacuum-filled, multi-celled airmesh-filament structure. For longer ranges such systems are probably best situated at the lower edge of the stratosphere (10-20 km) to avoid weather.


The heavy passenger capsule drive system would be integrated in lighter-than-air metamaterial sausages of quite impressive diameter.
Shorter-range tracks lower in the atmosphere will need a combination of tight tie-down to the ground and dynamic wind-load compensation sufficient for their operating speed. Longer-range, faster tracks can be placed in the calmer stratosphere, enclosed in even more impressively sized metamaterial sausages.


Note that, in contrast to current-day concepts like the Hyperloop, with the availability of infinitesimal bearings magnetic levitation (which needs special chemical elements for the magnets) can be avoided. With the distance to the wall maintained by physical contact via the infinitesimal bearings, no residual gas is needed for air-hockey-like suspension. A full vacuum is possible.



    >> Interplay with existing and future air traffic:


    Legacy air-traffic (old-timer historical kerosene driven noisemakers) should still be able to fly by sight.
Airmesh designers must consider that in their plans too. This may conflict with the desire to keep the looks of the landscape pristine for human eyes (optical cloaking).
It is difficult to guess how visible "air filaments" will be when designers just do not care about the looks.
Likely appearances may be: transparent, milky, iridescent - like deep sea creatures??


There could be constantly open flight corridors in the mesh, or the mesh could dynamically open up windsails so that vessels can move through. The sails should be able to detect localized non-wind-like forces and rupture in a controlled, reversible fashion when a plane or a bird crashes into them.



>> Anchoring density and anchoring pattern


    There are a lot of questions:
* What would be the most practical and aesthetic meshing pattern (foam edges?)
* What would be a good density of anchoring points on the ground in cities and on land?
* How would one do the anchoring of an airmesh at sea?
* What does one end up with if the mesh concept is applied to other "spheres" (hydrosphere, lithosphere, biosphere, ...)?


    ... 10,000 character limit ...

    >> Intro:


While cycling through the cornfields I recently had a eureka moment 8o, coming up with a really wild and crazy idea about what could be possible with underpressure-based, robust, lighter-than-air metamaterial structures.


    I regularly ponder about how AP technology could be applied to solve a number of problems.
    The idea I had may solve at least three of them and opens up a whole bunch of other opportunities and interesting questions.


    The three problems solved are:


    A) The problem of keeping something stationary relative to the ground in a high up laminar large scale wind-current (e.g. CO2 collectors in the sky). This seemed to be impossible without expending energy to actively move against the current.


B) I thought about what is likely to replace today's mostly three-bladed windmills that barely scratch/tap the lowest percent of the troposphere (100 m of 10 km). Obviously some silent, sail-like air accelerator/decelerator sheets/cloths/sails should become possible.
Future "power-windsails" may be quite a bit bigger than today's windmills, but they still need to be linked to the ground for counter-force and counter-torque. To avoid excessively large bases, advanced sail-like wind generators probably would not be made excessively large (that is, a large fraction of the 10 km troposphere). Also, giant towers permanently pose the danger of coming crashing down.


C) I thought about extracting the potential energy from rain droplets: couldn't one look at clouds as catchment lakes in the sky, available almost everywhere?



    >> So here's the idea:


Specifically, what came to me was to massively employ lighter-than-air structures in the form of aerogel-like "strings/filaments" (quite thick in diameter) that are tied/anchored/tethered to the ground and also intermeshed with themselves up in the sky. In the following I will refer to those structures as aerial meshes, airmeshes, or airgrids. Everything is kept held at all times. This is remotely similar to the principle of machine phase in the nanocosm, and it too comes with some advantages.


These structures seem to be easy to erect at giant scales. They could be applied for:
    * aerial traffic
    * large scale energy extraction
* and even, conversely, as a means for super-large-scale strong weather control (ozone too)


Besides spanning "windsails" in the mesh loops of the "air grid", "solar sails" are obviously also possible.
Also there may be rain sails, which I'll explain later.
All sails could/should be equipped with temporary deployment capability and modes that let part of the wind through (lamellas?).



    >> Wind-loads


    Obviously one must worry about excessive windloads.


Even uncompensated, advanced materials might be able to withstand wind loads (estimations needed); additionally, the floating air strings / air filaments could be armed with a dense rope in the core. Assuming a density of 4 kg/dm^3, a strong rope of about 1 cm diameter needs to be embedded in a lighter-than-air string of at least about half a meter diameter so that it starts floating.
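
A quick check of that rope figure (assumptions: rope density 4 kg/dm^3, sea-level air density 1.2 kg/m^3, and the lifting string itself idealised as massless):

```python
# Diameter of an idealised lighter-than-air string whose buoyancy per metre
# just carries an embedded rope of given diameter and density.
import math

def min_floating_diameter(rope_diameter_m, rope_density=4000.0, air_density=1.2):
    rope_mass_per_m = rope_density * math.pi * (rope_diameter_m / 2.0) ** 2
    return 2.0 * math.sqrt(rope_mass_per_m / (air_density * math.pi))

print(f"{min_floating_diameter(0.01):.2f} m")  # ~0.58 m, i.e. roughly half a metre
```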


To prevent critical loads and temporary collapse of the metamaterial due to wind pressure making it temporarily non-buoyant, there is the possibility of wind-load compensation.
Luckily with APM there's no additional cost in making the whole surface an active "living" structure.
By integrating two other technologies, wind loads may be reducible to acceptable levels or even completely compensatable.
Conveniently, when there is wind load there is also local power for the protection mechanisms.
The two main technologies usable for wind-load compensation are (names freely invented):


    A) "infinitesimalbearing parallel motionion cloaking"
    B) "adiabatic normal motion cloaking"


    A) "infinitesimalbearing parallel motionion cloaking" (this was presented by Josh Halls in his book "Nanofuture" as a means for propulsion) When air moves parallel to a surface the surface is moved with the same speed in the same direction. This replaces friction in air with much lower friction of "infinitesimal bearings" that are integrated in the air-vessels (or here air mesh strings) topmost surface layers.


    B) "adiabatic normal motion cloaking"
    When the aforementioned technique is used the air still needs to get out of the way sidewards of an obstacle.
    While the aformentioned technology/technique can compensate for parallel air motion there still remains a motion component that is head on to the surface. Obviously this must be a motion of one period/impulse of incoming and then outgoing air in the frame of reference that is moving with the parallel motion compensation speed (I hope that formulation is sufficiently comprehensible).
    What one would try here is to "grab" pockets of air compressing them down as they approach (this heats them up so they must be kept sufficiently thermally isolated to not loose their enegry) and then expanding them up again. This technique may be capable of reducing bow waves. (Though I'm rather wary about whether this could/would work or not.)
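
A small side calculation (standard adiabatic relation; the compression ratios are illustrative choices of mine) of how much such a grabbed air pocket heats up, which is why it must stay thermally isolated so the energy can be recovered on re-expansion:

```python
# Temperature of an air pocket after reversible adiabatic compression.
GAMMA = 1.4  # ratio of specific heats for air

def adiabatic_temperature_k(t_initial_k, pressure_ratio):
    return t_initial_k * pressure_ratio ** ((GAMMA - 1.0) / GAMMA)

for ratio in (1.5, 2.0, 4.0):
    print(f"{ratio:.1f}:1 compression: {adiabatic_temperature_k(288.0, ratio):.0f} K")
```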



    >> Robustness against lightning (and ice loads)


    Obviously one must worry about lightning. There seem to be two polar opposite options.


A) Adding lightning protectors of highly conductive material. On a large scale this would probably be a bad idea. They are likely to negatively influence the weather by quenching thunderstorms and the air-to-ground potential in general.


    B) Making the "air-strings" electrically highly isolating (not hard for an aerogel metamaterial out of high bandgap base material).
    A thin layer of intermediately conducting water droplets that heats when lightning strikes (it converts to plasma and may damage the surface) may be avoidable by making the surfaces highly hydrophobic. As a nice side effect combined with small scale active surface movement this can also prevent any ice deposits and thus dangerously high ice loads.


    A&B) A third option is to make the structures switchable between the two extreme states.
    This may allow to extend the weather control to electric aspects of the atmosphere.


Avoiding long stretches of electrical conductors (km scale) generally seems to be a good idea.
By exclusively resorting to chemomechanical energy transmission one gains resilience against directly hitting solar storms (giant prominences heading directly towards Earth, which would be devastating today due to the induction of high voltages in long power lines) and maybe even resilience against EMPs from not-too-near atomic blasts (which hopefully will never happen).



    >> Exotic untapped energy forms:


There's a constant, quite high electric field between ground and sky (aerostatic electricity).
I don't know how much energy is in there and what would happen if large fractions of this electric reservoir were to be extracted or boosted. There's some questionable science going on there with today's pretty limited technology.
    Simple experiment:


    A little more dangerous:


    Slanted horizontal "sails" hanging below the clouds could be used like funnels guiding the rainwater to the "air-mesh-filaments" that then act like eavestroughs in the sky allowing to tap the full potential energy of rainwater. Then we wouldn't depend on a mountains with a suitable high up valley that can be blocked anymore.
    Most of the rain must be redistributed at a lower level (like a shower head in the sky - rain sails ?!) to not negatively influence vegetation. Yes that sounds ridiculous but it might make sense.



    >> The structure of the lifting metamaterial


For further discussion of the limits of the technology I need to go a little more into the detail of the structure of the lifting metamaterial. These ultralight metamaterials are made out of cells with thin gas-tight walls and internal 1D trusses (possibly fractally arranged) that prevent collapse from external pressure. Advanced surface functionalities of the airmesh strings are not located on every cell wall but on the outermost walls of a "sky string" or independent balloon. These outermost surface functionalities are not part of the base metamaterial. The "sky strings" have many basic cells throughout their diameter. The main function of the walls of each cell is just gas exclusion. This compartmentalisation, which is finer grained than the whole air string, gives some redundancy and safety. If the metamaterial is made out of an incombustible base material like sapphire then there is little to no chance that these structures come crashing down. Nice! The internal trusswork might be equipped with active components to adjust cell sizes a bit such that buoyancy can be adjusted. Too much buoyancy is bad too, because of too much upward-pulling force on the anchor points.


    ... 10,000 character limit ...

I for one am regularly checking in here to see if there's anything new.
And I will continue to do so.


    What kept me from posting?


1) I was visiting the first-ever Maker Faire in Vienna, Austria, showing off my collection of 3D prints.
I also made a lot of graphical info sheets for A4 flipcharts about 3D printing and APM.


2) I tried to keep myself up to date with cutting-edge new high-level stateless interactive programming methodologies (applicative functional reactive programming), since I think this will be of paramount importance for 3D modelling the future reality (elm, purescript, GPU stuff, ...).
Actually the programmatic 3D modelling software I currently use (OpenSCAD) puts a major pressure of suffering onto me, since with its lack of higher-order functions it does not allow me to create highly reusable libraries (specifically I hit a wall with gears & threads). This stops a lot of other ambitions in their tracks - stuff that depends on gears and threads, which obviously is a lot.


    3) What I also did was documenting a first draft of an idea I had regarding macroscopic self replication.
    I've published it here:
    http://reprap.org/wiki/RepRec
    (It depends on gears and threads :S)
I think a working self-replicating pick & place robot capable of performing practical tasks may make the concept of exponential assembly be perceived as a more plausible thing.
Though note that in the macrocosm there's gravity but no sticking force, and in the nanocosm it's the other way around (actually there is gravity, but it's overpowered by thermal noise); thus macroscopic block-based self replication only partially overlaps with nanoscopic block-based self replication.
The existing approaches to self-replicating pick-and-place robots, like this one:
http://rpk.lcsr.jhu.edu/wp-con…ses13_An-Architecture.pdf (Matt Moses et al)
use way too few prefabricated base part types. This makes them clunky and totally impractical, e.g. for automated 3D printer assembly. Also existing approaches use clipping or friction (including friction in screws) for assembly.
This makes large systems out of many small parts unnecessarily lacking in stiffness.
I think that using the principle of reinforcement, like in concrete construction, is the right way to go:
making diversely profiled short modular rod segments with channels going through them, through which rebars can be fed that are themselves composed of modular short segmented chains.
Recently a new model of self-replicating 3D printer came out: http://dollo3d.com/
I think the herringbone drive method is a good solution; the mounting method, namely friction plugs, not so much.


Here's some other stuff of what I was up to lately:
* I'm still steadily extending my APM wiki.
* I just barely started moving the highly graphical German-language presentation stuff I have lying around into the English wiki (I need to focus more on that).
* I still didn't get around to making those youtube videos. I regularly think about them.
* I still didn't get around to making really nice drawn illustrations.
(I know that I potentially do have that drawing skill level.)
* I was watching some videos about Google Tango, Google Daydream, and a bit of deep learning.


    ----


About molecular biology of the cell: I once attended a two-semester course on the topic.
Pretty interesting stuff - the things that surprised me most were:
* that the transcription from DNA to mRNA and the translation from mRNA to proteins happen in a pretty parallel fashion. It looks like a sparse feather.
* that the visualisation pictures of cell membranes are basically all very wrong, showing way too much lipid layer and way too few proteins going through, and getting the size ratios very wrong too.
* the character of the chain of energy transport, with something like waiting-position control points
* the crazy density of the endoplasmic reticulum in the cell not clogging up all the transport
* the effect of compartment dimensionality on diffusion transport
* the rich RNA world beside the protein world


It seems that only a tiny fraction of molecular biology is really directly applicable even to the early bio-inspired stages of advanced APM bootstrapping. I think it'll still take some more time until resources and courses dedicated to that topic are made.


You seem to have read a lot about history too. This is not a place where I usually would snoop around, since it is even further away from advanced APM than molecular biology. Since I'm unlikely to read this set of literature: if you've found/find something that may surprisingly be applicable specifically to bootstrapping APM, don't hold back from telling us here. Even if it's just a hunch.


    ----


Awesome that you now have a 3D printer :)
I have an Ultimaker Original (one of the very earliest batches).
You say that you'd like to "design and build some nanotech related tools".
Do you have anything specific in mind?
I have made quite a set of APM principle demonstration objects by now.
(I need to post a picture.)


    >> "It's amazing sometimes how much time one expends on the allegedly "trivial" aspect of mounting and arranging parts in some apparatus."
Yes, 3D printing can be very time-consuming. I usually spend much more time designing than printing.
Luckily my printer is in a state where it operates without a hitch almost all of the time, so this is not a place where I lose much time anymore.


    ----


PS: I have at least two further major posts for the forum in the pipeline.


PPS: I don't really know what keeps others from posting.
I think I vaguely know what Eric Drexler is up to right now:
With the molecular sciences more and more on the right track and the recent advances in machine learning (deep learning / deep dream)
I think Drexler has switched his main focus to artificial intelligence (TensorFlow & stuff?).

Quote from Jim Logajan

    I'm probably missing something, but I don't see how one can draw any inference about future tools from the first set of invented tools for molecular carbon mechanical synthesis. I skimmed sections 13.3.7, 13.3.8, and 8.5.2 of Nanosystems and it does not look to me like there are any dangers. Even if none of the released binding energy were stored for later re-use, the universe is awash in thermonuclear energy. I think it is something of a technical oddity that so much of it is temporarily inaccessible. Energetically inefficient nanotech production would still be, on a relative scale, far more efficient than current tech production.

    Ok, I see that I missed some important points in my initial post.


    1)
I do not infer a limitation on future toolsets based on this early toolset feasibility analysis.
In fact I try to show the contrary. I reckon that the "proven" possibility of energetically reversible mechanosynthesis should imply the possibility of bond-topologically reversible mechanosynthesis.
So it even makes sense to talk about bond-topologically reversible mechanosynthesis (disassembly) - (establishing a basis for discussion).


The point I'm trying to make is that in a competitive market early attempts are unlikely to wait with production until recycling is perfectly figured out.
And that bond-topologically reversible mechanosynthesis (by coupling separate mechanosynthetic reactions together in the background and applying the illustrated principle) looks to be a lot more difficult to achieve than bond-topologically irreversible mechanosynthesis. There's the additional difficulty, which I haven't mentioned yet, that one has to do more than the easier open-loop control to take apart stuff that has been damaged (radiation, heat, ...).


In short, what I am really worried about is the temporal sequence of development.
In the early development stages (DNA, proteins) bond-topological irreversibility is irrelevant, since bio-organisms can do the recycling for us. But when we start to arrive at the diamondoid stuff (swimming like plastic or even floating in the air) and arrive at high production volumes before the cleanup is fully figured out, we might be in for trouble - damaging nature.


I have no clue how much influence we will have on the temporal sequence of development - I'd guess rather little.
What I think should be easier to achieve than bond-topologically reversible mechanosynthesis is making nanosystems out of many
reusable small parts instead of fusing them together into a single monolithic crystal.
This way diamondoid products can at least be recycled into themselves for a while.


2)
I do not think high energy consumption prevents recycling.
Actually the contrary. The fact that the atom-by-atom assembly step is the most energy-consuming one (because it handles the most surface area, not because of any lack of efficiency) gives a strong incentive to make bigger parts ("crystolecules") reusable. That is: not fuse them all together into a single macroscopic block. Also production by recycling of "crystolecules" (~<32 nm) or even bigger "microcomponents" (~<1 µm) should be much faster, since less waste heat has to be removed.


X)
I think there's a recurring pattern in history that stuff gets produced in masses while it can't yet be disposed of, and that it then produces problems due to its piling up. Side note: besides human civilization, nature also provides such examples, albeit on much bigger timescales. Examples are "the Great Oxygenation Event" and "the lignin catastrophe" (less well known).
I think the widespread belief (among the ones that even know about APM) that nanofactories are exempt from that pattern might cause problems - blindness to the possible danger of waste. What will really happen all depends on which capabilities we arrive at, and when.


    I think in any case we'll need to have waste management guidelines.
    I'm collecting info about recycling on my wiki:
    http://apm.bplaced.net/w/index.php?title=Recycling


    ---------------------



Quote from lsuess

Can one infer bond-topological reversibility from energetic reversibility?
....
I think finding an answer to this question is highly relevant for recycling (at the advanced end of the technology spectrum)


I wrote nonsense there. - Your comment helped me reformulate what I really meant:
How much effort will it take to achieve bond-topological reversibility?
Given that the possibility of energetic reversibility should imply the possibility of bond-topological reversibility.
I think finding an answer to this question is highly relevant for recycling (at the beginning of the advanced end of the technology spectrum).

Quote from Jim Logajan

I'm afraid I don't know what the image is supposed to be showing me. None of the axes are labeled - do you have some additional context or discussion somewhere?


    I made an annotated version now.
    (I've attached the original lossless vector graphics (*.svg) in zipped form -- *.svg not allowed)

    In this video ( "Mechanosynthesis - Ralph Merkle & Robert Freitas" )
    R.Freitas says that you don’t take mechanosynthesized stuff apart again.
    See here: (Jump forward to 47:42)


    Guy in audience: "If I place a germanium incorrectly which tool do I use to get it off."
    R.Freitas "You don't."



    So the tool-set described in the tooltip paper discussed there
    ( http://www.molecularassembler.com/Papers/MinToolset.pdf )
    is not reversible in the bond-topology state.
    The set was also not simulated in a way that maximizes energetic reversibility but instead in a way that
    makes it somewhat reliable at 300K (E_react = 0.40 eV gives P_react = 2*10^-7)
    and extremely reliable at 80K (E_react = 0.40 eV gives P_react = 5*10^-26).
    But most importantly the reactions were considered standalone, uncoupled to others.
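    As a quick sanity check of those numbers: they follow from the plain Boltzmann factor exp(-E_react/(kB*T)) (my own back-of-envelope, not the paper's exact model):

        -- error probability per attempt from an energy difference, via the Boltzmann factor
        kB :: Double
        kB = 8.617e-5                          -- Boltzmann constant in eV/K

        pReact :: Double -> Double -> Double
        pReact eReact temp = exp (negate eReact / (kB * temp))

        -- pReact 0.40 300  ~ 2e-7    (somewhat reliable at room temperature)
        -- pReact 0.40 80   ~ 6e-26   (extremely reliable at 80 K, roughly the quoted 5e-26)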



    According to E.Drexler mechanosynthesis can be made to achieve very high levels of energetic reversibility:
    Nanosystems 13.3.7.b.
    ... reliable mechanochemical operations can in some instances approach thermodynamical reversibility in the limit of slow
    motion. ... ... The conditions for combining reliability and near reversibility are, however, quite stringent: reagent moieties must on encounter have structures favouring the initial structure, then be transformed smoothly into structures that, during separation, favour the product state by ~ 145 maJ (to meet the reliability standards assumed in the present chapter). ...



    * "smoothly" I think means forces times movements must be captured in the machine phase background. Holding against pulling force - preventing ringing snapping.
    * Furthermore I think that one needs to couple multiple reactions with E_react-one<<kB*T energy-loss per deposition/abstraction
    together to E_react-all>kB*T as a whole to prevent the single reactions from running backwards.
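    For orientation on the ~145 maJ figure from the Nanosystems quote above, here is a small check of what it amounts to in units of kB*T and in error rate (my own arithmetic, not from the book):

        kBJ :: Double
        kBJ = 1.381e-23                        -- Boltzmann constant in J/K

        -- the ~145 maJ (= 145e-21 J) bias expressed in units of kB*T at 300 K
        biasInKT :: Double
        biasInKT = 145e-21 / (kBJ * 300)       -- ~ 35 kB*T

        -- corresponding per-operation error probability, exp(-35) ~ 6e-16,
        -- which is in the range of the reliability standards assumed in Nanosystems
        errRate :: Double
        errRate = exp (negate biasInKT)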


    I made a 3D model for visualizing the qualitative progression of the energy wells that is necessary for an energetically reversible mechanosynthetic operation. This model is quantitatively disconnected from any particular physical process like e.g. hydrogen abstraction.
    http://apm.bplaced.net/w/index…nosynthesis_principle.jpg


    The question is:
    Can one infer bond-topological reversibility from energetic reversibility?



    Surely it seems difficult to rip out a carbon atom from the centre of a flat diamond surface, say a (111) surface.
    But if the atomically flat plane does not have macroscopic size one can start from the edges, where fewer than three of the four bonds are inaccessible. Astoundingly there was an AFM experiment conducted where, on an atomically flat surface, embedded tin atoms were controllably swapped with silicon atoms and vice versa (surface-to-tip).
    https://www.uam.es/gruposinv/s…gy_4_803_Custance_AFM.pdf
    They used a lot of tapping, akin to what E.Drexler describes as "conditional repetition".



    I think finding an answer to this question is highly relevant for recycling (in the advanced end of the technology spectrum).
    The official nano-factory video says something like "the only waste products are clean water, clean air and heat".
    But what about the product itself once its microcomponents become obsolete?



    If mechanosynthesis can't be made bond-topologically reversible right from the start, the only ways to get rid of obsolete versions would be:
    * burning them - only possible if they don't form slag due to incorporated Si,Al,Ti,...
    * dissolving them (sodium beam treatment, acids, ...)
    If even that is not done we might sink deeply into diamondoid waste.



    I think that might be the most severe and most overlooked danger of APM.

    Many thanks for migrating all the posts from the Beehive forum to the WoltLab forum for better maintainability. All the posts seem well preserved :) - perfect job.
    Also I'm delighted to see that you've implemented my suggestions for sub-forum topics.


    Now everything is ready for a great year 2016 :)

    >>... Also, the underlying hardware operates imperatively (it has state that changes with time) so there is a mismatch between the declarative notation and what is actually occurring on the machine. ... <<

    Strong objection!!

    In advanced sensible target nanosystems (excluding early slow diffusion-based nanosystems, e.g. DNA)
    the lowest levels of the underlying hardware need to be nearly reversible to prevent excessive heating.
    And the needed *reversible low-level logic has NO inherent state that changes with time!*

    To elaborate on that:
    All the internal apparent state (I'll call it pseudo-state) is completely predetermined by the starting state (I'll call this genuine-state), which is located at a higher level. This is the case because the bijective transformation steps (which define reversibility) allow no branching in the "forward" or "backward" direction of execution. The internal pseudo-state can appear big (in memory usage) relative to a comparatively small external genuine-state because the pseudo-state is just decompressed genuine-state. Decompression introduces no additional information (state). Since stretches of low-level reversible computation are, as shown, stateless, they are pure functions and *inherently functional*!
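    As a tiny illustration of that point (my own toy example, in Haskell): a reversible gate is just a bijective pure function, and chains of such gates carry no hidden state of their own.

        -- A Fredkin (controlled-swap) gate: bijective, self-inverse, pure.
        fredkin :: (Bool, Bool, Bool) -> (Bool, Bool, Bool)
        fredkin (c, a, b)
          | c         = (c, b, a)    -- control set: swap a and b
          | otherwise = (c, a, b)    -- control clear: pass through

        -- Running it twice restores the input (fredkin . fredkin == id),
        -- so any "state" in the middle of a gate chain is just a
        -- deterministic image of the input - pseudo-state, not genuine-state.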

    About the length distribution of reversible stretches (granularity and upreach):
    To save a maximum amount of energy one needs to cover the lowest HW level with many long stretches of reversible computation. Accomplishing that shouldn't be a big problem at the lowest cores of a nanofactory, where you have the rather simple problem of churning out a great number of identical standard parts via simple open-loop control. Further up in the physical assembly hierarchy it might become more interesting, with richer part-composition situations and more complex nano- to micro-logistics - more on that later. It is possible to composably program long and big lowest-level reversible computation stretches (obviously they are not monolithic). It will be done, and it necessarily is purely functional - otherwise reversibility would be destroyed. There is some research about reversible assembly languages - I currently can't guess whether those will or won't be programmed "by hand".

    ---- An alternate approach:

    I have another way to show why I think it's unsurprising that low-level hardware is most often perceived as inherently stateful although this is wrong. For this I'll need to briefly describe a maybe (??) barely known concept that is IMO very important:

    The concept of *Retractile Cascades* (as I understand them):
    Legend: X, Y, Z
    ... each stand for the same number of arbitrary bits - words, bytes, whatever, but of equal size
    ... the individual X's don't have mutually equal content, and neither do the Y's and Z's

    When computing reversibly (e.g. with rod logic):
    1a) (X+YY+YYYY+...) Starting from the input X, the amount of memory space added per computing step first grows, as far as necessary (in rod logic every step corresponds to pulling all rods of one evaluation stage).
    1b) (X+YY+YYYY+YYYYYYYY+YYYY+YY+Z)=M Then the still memory-usage-increasing steps shrink back down until the small desired result Z is reached in the last step.
    1ab) Overall there is monotonous growth in used memory space - first fast, then slow.
    2) (Z':=Z) Make an imperative, destructive copy of the output Z. This causes some waste heat, but not too much. Caution: Z's information content (entropy) and its memory space usage are distinct things.
    3ab) M-Z-YY-YYYY-YYYYYYYY-YYYY-YY=X Finally, starting from the result Z at the end of the cascade, everything can be executed in reverse ("retracting the cascade") to free the used (garbage-filled) memory space for the next computation in a (near) dissipation-free manner. The cascade's input X is then ready for an imperative destructive update that starts the next cycle.

    So basically a retractile cascade is a stretch of reversible computation optimized to save energy.
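    Here is a deliberately tiny toy model of that pattern (my own sketch in Haskell; names and numbers are made up, and the energetics is of course not modelled):

        step :: Int -> Int           -- one reversible evaluation stage (bijective)
        step = (+ 3)

        unstep :: Int -> Int         -- its exact inverse
        unstep = subtract 3

        forward :: Int -> [Int]      -- 1a/1b: grow the cascade, keeping all intermediates
        forward x = take 5 (iterate step x)

        copyOut :: [Int] -> Int      -- 2: the single irreversible (copying) operation
        copyOut = last

        retract :: [Int] -> Int      -- 3ab: uncompute back down to the original input X
        retract stages = foldr (const unstep) (last stages) (tail stages)

        -- copyOut (forward 7) == 19   (the result Z, destructively copied out)
        -- retract (forward 7) == 7    (cascade retracted; garbage memory freed, X restored)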

    Now - to show why this can seem imperative, i.e. why pseudo-state may seem like genuine-state - such a retractile cascade can be visualized as a directed acyclic graph depicting the mutual dependencies of the memory cells. It starts at a root input node, branches out, and then merges back to a single output node. If one crops out a patch from the centre of this graph and asks how a particular value/bit emerges at a particular node inside this patch, while only having the cropped-out piece of the graph available for reconstruction, one needs a lot of genuine-state on the edges of the cropped-out patch, namely at all the places where the incoming (or outgoing) edges cross the border of the patch. If the observed context (the cropped patch) is too small, stuff that is actually functional appears to be imperative. The other way around: if you have sufficient knowledge to move your horizon of perception farther outward, more of the true functional nature of seemingly imperative stuff becomes visible.

    I think this often unavoidable limited-context tunnel view, combined with the fact that energy-saving reversible logic is still a thing of the future, is one of the main reasons why low-level hardware is likely to be mistaken as inherently stateful.

    (analysis->design) For the actual design of reversible computation (instead of the here done analysis) one *needs* sufficient horizon to become functional and thus reversible and efficient. Curiously and luckily its possible to built up this big horizon from small functional building-blocks.

    The abstract gist I see here is that: *statefulness is a relative property*
    since the border between genuine-state and pseudo-state is movable by changing the context.
    genuine-state == information of unknown cause to the observer
    pseudo-state == decompressed information of known cause to the observer
    The cause is the compressed input information.

    ---- Leaving the reversible realm:

    Genuine-state, destructive updates and random number generators (RNGs) are undoubtedly necessary at some point.
    So did I just shift the mismatch problem upwards and draw a picture of a "functional-imperative-functional burger"? I'm not so sure about that.

    The lowest-level occurrence of those troublemakers is at the places where the stretches of reversible computation connect. As mentioned before, some genuine-state information is located at those connection places. This is information that contains some decisions made for practicality ("config-axiom-variables"). But state constants are actually functional (pure functions are constants too); the real issue is irreversible operations. When going beyond stretches of reversible computation like retractile cascades, what does it mean to include irreversible, non-bijective operations? From an analytic perspective: what if the observed context grows big enough to enclose irreversible joins (deletions and destructive updates) or forks (spawned randomness)?

    Joins - while not reversible any more - can remain functional (same outputs on same inputs). This is often seen in functional libraries that seal imperative code. By carefully packaging deletions and destructive updates up into a functional interface one can restore the functional code-maintainability benefits and carry them upward to a higher code level (nestable & scalable). Something similar is often seen in Haskell libraries, though there it is often rock-bottom lowest-level imperative code that gets isolated - code which would be unsuitable here because of its way too high density of irreversible updates. Today using and hiding destructive updates seems reasonable in all situations.
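    A minimal example of such sealing in today's Haskell (just an illustration of the packaging idea, nothing nanofactory-specific):

        import Control.Monad.ST (runST)
        import Data.STRef (newSTRef, modifySTRef', readSTRef)

        -- Destructive updates sealed behind a pure interface: internally this
        -- sums a list with a mutable accumulator, but from the outside it is an
        -- ordinary pure function (same output for the same input, no visible state).
        sumSealed :: [Int] -> Int
        sumSealed xs = runST $ do
          acc <- newSTRef 0
          mapM_ (\x -> modifySTRef' acc (+ x)) xs
          readSTRef acc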

    Forks are actually difficult to create. For deterministic PRNGs there can always be found a context which shows that there actually are no forks. TRNGs (quantum randomness and physical noise) seem to be truly un-isolatable forks. For all practical purposes they seem to introduce absolute genuine-state and thus they may be the one single exception to the relativity of statefulness.
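    To illustrate the PRNG case (a toy example with Haskell's System.Random - the seed is the widened context that removes the apparent fork):

        import System.Random (mkStdGen, randoms)

        -- The "random" bits are pure pseudo-state, fully determined by the seed.
        pseudoFork :: [Int]
        pseudoFork = take 5 (randoms (mkStdGen 42))

        -- Evaluating pseudoFork twice yields exactly the same list - no genuine fork.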

    Since longer reversible stretches are desirable, the connection points of stretches of reversible computation do not lie at rock bottom but at a higher level. On an even higher level - namely the level of multiple joined stretches of purely reversible computation - it is yet rather unclear what ratio of reversible to irreversible steps is to be expected (pervasiveness of irreversibility). In an advanced nanofactory the reversible hardware base will reach up to the "height" where the efficiency argument loses its grip. If that is high enough, the software-maintainability argument might kick in before the efficiency argument runs out. Then there'll be no space for an imperative layer in the aforementioned "programming style burger" any more.

    >> ... so while I understood the concept, it was not easy to figure out how to express the same program declaratively. ...<<

    I had that exact same experience.
    Namely with a maze generation algorithm and a small Bomberman game.
    Both seemingly inherently stateful.

    I think the answer to that long standing problem is:
    A.) usage of modern functional data-structures and
    B.) usage of modern functional programming capable of handling interleaved IO

    ad A.) There are already libraries for data-structures with O(1) random access and cheap non-destructive in-place updates (e.g. implemented by diffing). Haskell's (awfully named) vector library is an example.
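    A minimal illustration with Data.Vector (note: plain Data.Vector copies on update rather than diffing, but the interface is the non-destructive one meant here):

        import qualified Data.Vector as V

        -- O(1) random access plus non-destructive update: v stays unchanged,
        -- v' is a new vector with one element replaced.
        demo :: (Int, Int)
        demo =
          let v  = V.fromList [0 .. 9 :: Int]
              v' = v V.// [(3, 99)]      -- bulk "update" returning a new vector
          in (v V.! 3, v' V.! 3)         -- == (3, 99)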

    There's the common critique of slowness due to fine-grained boxed data-structures.
    Today this is solved by workarounds (the aforementioned sealed imperativity in libraries for functional languages).
    But I'd guess that at the microprocessor level of advanced nanofactories (not rock bottom) there'll be some architecture optimized for functional language execution that circumvents the so-called "Von Neumann bottleneck". Today there exists a cool demo running on FPGA hardware called "Reduceron".
    https://www.doc.ic.ac.uk/~wl/i…ts/papers/reduceron08.pdf
    It claims game-changing performance:
    https://github.com/tommythorn/Reduceron

    ad B.) first-order functional reactive programming (first-order FRP)
    I just recently encountered this with the "elm" language - it blew me away.


    ------------------------------- Performance:

    Regarding lower level:
    >>... For operations requiring real-time responses, such as nano-systems operating in-situ, imperative programming may still be the only realistic choice. ...<<

    Most low-level stuff in nanofactories will probably be dead-simple open-loop control.
    The strong determinism of functional languages is a good basis for reliable systems.
    Nonetheless I guess I need to read up about this a bit.

    Regarding higher level:
    >> ... my understanding is that it is difficult to get predictable performance from existing implementations. ... <<

    This is often mentioned in light of lazy evaluation.
    Lazy evaluation is not inherent to, but made possible by, functional programming.
    (non-strict -> choose the best from both worlds)
    I personally do not have practical experience with laziness in big projects.
    I haven't run into very many complaints about it.
    Here's some low-noise commentary on this:
    https://www.quora.com/Is-lazy-…in-Haskell-a-deal-breaker

    I feel like there has been a lot of improvement over the last years in existing implementations (languages + libraries). By now there are (at least for Haskell) many pre-built libraries for (both lazy and strict) purely functional data-structures with known Landau complexities for time and space (amortized or worst case). Those data-structures contain all the clever and/or dirty work needed to avoid the usual inefficiencies and space-leaks of naive implementations.
    There are quite a few efficiency demos for functional languages (not sure how objective the spectrum is). Especially intensive number crunching with non-trivial parallelism (multi-core, not GPU) is said to be way easier to program with functional-language-enabled "software transactional memory".
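    For readers who haven't seen it, here is a tiny sketch of what "software transactional memory" looks like in Haskell (just the standard stm package, nothing nanofactory-specific):

        import Control.Concurrent.STM (TVar, atomically, readTVar, writeTVar)

        -- A composable, atomic transfer between two shared counters; safe to call
        -- from many threads concurrently without explicit locks.
        transfer :: TVar Int -> TVar Int -> Int -> IO ()
        transfer from to amount = atomically $ do
          a <- readTVar from
          b <- readTVar to
          writeTVar from (a - amount)
          writeTVar to   (b + amount)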


    ------------------------------- Usage:
    >>... That project lasted about a year, after which I did not encounter any use of declarative languages.<<

    Many people in the 3D printing maker community (including me) are using "OpenSCAD", a declarative purely functional programming language (with C-like syntax). In fact I do my 3D modelling work almost exclusively with it. The restriction to the description of static non-interactive objects makes it very different from "normal programming" though. A nanofactory is a lot about 3D modelling.

    Constructive solid geometry can be made incredibly elegant in functional languages.
    I did an experiment here:
    http://www.thingiverse.com/thing:40210
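    To show what I mean by elegant (this is neither OpenSCAD nor ImplicitCAD code, just my own minimal Haskell sketch of CSG over signed distance functions):

        -- CSG as plain function composition: a shape is a function from a point
        -- to its (approximate) signed distance.
        type Point = (Double, Double, Double)
        type Shape = Point -> Double

        sphere :: Double -> Shape
        sphere r (x, y, z) = sqrt (x*x + y*y + z*z) - r

        box :: Double -> Shape              -- cube of half-width h (approximate SDF)
        box h (x, y, z) = maximum [abs x, abs y, abs z] - h

        union, intersection, difference :: Shape -> Shape -> Shape
        union        a b p = min (a p) (b p)
        intersection a b p = max (a p) (b p)
        difference   a b p = max (a p) (negate (b p))

        -- e.g. a cube with a spherical hollow:
        holedCube :: Shape
        holedCube = difference (box 1) (sphere 0.8)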
    Then there's the super powerful lazy infinite multidimensional automatic differentiation method invented by Conal Elliott - very useful for gradients, normals, curvatures and whatnot in 3D modelling (Taylor series).
    This and other bleeding-edge stuff is AFAIK integrated in the Haskell 3D modelling program "ImplicitCAD",
    written mainly by Christopher Olah.
    Sadly there are two major problems. One: it's still horribly hard to install, which brings me to the point of dependency hell. There's the functional package manager Nix - another example of a practical application of functional programming. And two: this one is actually too complicated to go into here.
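    To give at least a flavour of the automatic differentiation idea mentioned above, here is the heavily simplified first-derivative core in my own words (Conal Elliott's method generalizes this to lazy infinite towers of higher derivatives):

        -- Forward-mode AD with dual numbers: carry a value and its derivative together.
        data Dual = Dual { val :: Double, der :: Double }

        instance Num Dual where
          Dual a a' + Dual b b' = Dual (a + b) (a' + b')
          Dual a a' * Dual b b' = Dual (a * b) (a' * b + a * b')
          negate (Dual a a')    = Dual (negate a) (negate a')
          fromInteger n         = Dual (fromInteger n) 0
          abs    = error "not needed for this sketch"
          signum = error "not needed for this sketch"

        diff :: (Dual -> Dual) -> Double -> Double
        diff f x = der (f (Dual x 1))

        -- diff (\x -> x*x*x + 2*x) 3.0 == 29.0   (derivative of x^3 + 2x at x = 3)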

    On another front there is functional reactive programming:
    Conal Elliott again:
    https://www.youtube.com/watch?v=faJ8N0giqzw
    With first-order functional reactive programming (elm - designed by Evan Czaplicki) interactive programming seems to become a breeze - actually it promises to be easier than in imperative languages.
    That should open up the usage space quite a bit.



    ------------------------------- Collected side-notes:
    * Reversible actuators:
    Bottommost reversible hardware includes not only reversible logic but also reversible low-level mill-style actuators for the mechanosynthesis of standard parts.
    * Motivating:
    Even in the reversible retractile-cascade stretches some irreversibility needs to be added in the clock to give the nanofactory a minimal but sufficient motivation to move in the right direction - changing pseudo-future into genuine-future.
    * Pipe-lining:
    Unfortunately retractile cascades seem to block pipelining quite a bit (compare Konrad Zuse's mechanical four-phase pipelining in the Z1, a purely mechanical Von Neumann computer). It probably comes down to a trade-off between dissipation and speed.
    * Unwanted (?) freedom:
    Looking again at a point in the dependency graph, one can create a partitioning very akin to the light-cone - the "dependency cone". One can find an area with nodes that are neither in the pseudo-past nor in the pseudo-future of the analysed node. In an actual implementation all those non-interacting nodes must be shifted to the relative past, present or future. Thus there is some freedom of asynchronicity. Additional state is needed to fix these free, undefined parts of a stretch of reversible computation. The obvious choice to fix this is to use a synchronizing clock. After that one can scan through all the pseudo-state slices of the reversible-computation-stretch pure function with a one-dimensional slider. Inside a retractile cascade (with the clock included!) there are no rotating wheels, reciprocating rods or other parts that move freely (that is, thermally agitated). Everything is connected. Thus the whole intermeshed nano-mechanical system has only one single degree of freedom. The whole process is fully deterministic.
    * Reversible computing:
    bijective mapping ->
    no spread in state space, neither in the pseudo-future nor in the pseudo-past direction ->
    constant entropy -> no arrow of time -> no real future and real past
    In contrast to imperative stuff which introduces the situation where you "split reality"
    Y-join: bit deletion (overwriting) - (possibly?) multiple pasts - system entropy decreases
    Y-fork: random bit - multiple futures - system entropy increases
    * (Evaluation stages in retractile cascades do not contain equal information/entropy but the snapshots of the whole Cascade between stage evaluation steps do. -- entropy(output)/entropy(input)=?? )