Posts by lsuess

    The reason I think that Eric Drexler has switched his focus is this video of a somewhat recent talk he gave:
    Eric Drexler - A Cambrian Explosion in Deep Learning
    Filmed at the Free and Safe in Cyberspace conference in Brussels in Sept 2015


    Also, with Radical Abundance written and published, I think a major load is off his shoulders.

    My interest in the subject is more as an area of problems that can be better attacked by nano robots than as a source for bootstrapping nanotechnology. My impression is that most of the advances in things like DNA and RNA manipulation (e.g. CRISPR/Cas9) appear to be due to discoveries of ancient enzymes that can be turned into tools rather than to clever de novo nucleotide protein engineering.

    So you mean like the Nanomedicine books by Robert Freitas (I haven't yet read them)?
    About the discovery of ancient enzymes: molecular biology is most definitely a treasure trove for the creation of future medical treatment methods, containing stuff that we "never" could have come up with ourselves. With the recent discovery of CRISPR/Cas and newer related techniques quite a "quantum leap" (in the sense of discrete, not small) was made - thinking back on the low survival chances with crude methods like cloning (well, this is not quite gene editing but a full swap) and the basically random insertion points of older gene editing techniques.
    I think that with more and more of the ancient stuff becoming decoded, de novo nucleotide/protein/peptide/peptoid/foldamer engineering will become more and more important. There are three possible use cases for de novo foldamer engineering (foldamer being the most general case): A) as artificial enzyme systems, B) as "simple" delivery vessels for drugs, C) for bootstrapping advanced APM. For use case A I think it is important to first understand "simple" examples from nature in order to then improve upon them. I have a hard time guessing whether use case B is right around the corner or will still take more than a decade to get going.
    What is really incredible is that the human genome is just a few gigabytes in size and still compresses so much information. If one compares that to the data size of modern operating systems it seems ridiculous. I mean, how many different types of proteins and other molecules can be encoded in there? As a metaphor: the seamless blend between system design that evolved to be nicely separable and orthogonal and a completely entangled mess (full of stuff that is only there because it doesn't cause problems) makes researching molecular biology like discharging an old battery - you never know how much is left. Then there's the truly random element of thermal motion, not present in normal computer systems, which adds another fascinating aspect. Ok, I'm drifting off too far.


    ...

    >> The limits of height


    There is the interesting question of how high up one can go.


    With the capability to lift stuff high enough one could e.g. start thinking about raising the speed of a linear-rail-acceleration vacuum train to a level where it essentially becomes a propellant-less direct-to-orbit space launch system. The space vessel is released into the atmosphere at a height where the density is low enough that the deceleration shock does not damage or destroy the cargo. More on that later.


    So what is the limit? This seems to be a rather hard question to answer.
    Under the assumption that buckling instabilities can be avoided with fractal truss frameworks for cell inflation, scaling seems to imply that by simply keeping the mass of a metamaterial cell's internal structure constant while spreading it over a larger volume, one can keep up with the falling density of air while still being able to compensate the external pressure.


    With rising volume the mass of the super thin sealing surfaces does not lose relevance. While both the mass of the displaced gas and the mass of the outward pushing truss structure in a cell stay the same, the surface area grows with the volume. So either the walls are made ever thinner or the lifting capacity will decline. (more analysis needed)
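    To make that scaling argument a bit more concrete, here is a rough back-of-the-envelope sketch (variable names and numbers are my own illustrative assumptions, not a worked-out design):

    ```haskell
    -- Net lift of one cubic vacuum cell: displaced-air mass minus strut mass
    -- minus wall mass. All parameters are illustrative assumptions.
    netLiftPerCell
      :: Double  -- air density at altitude [kg/m^3]
      -> Double  -- cell edge length [m]
      -> Double  -- strut (truss) mass per cell [kg], kept constant while scaling
      -> Double  -- wall thickness [m]
      -> Double  -- wall material density [kg/m^3]
      -> Double  -- net lift [kg]
    netLiftPerCell rhoAir a mStrut tWall rhoWall =
      rhoAir * a^3 - mStrut - 6 * a^2 * tWall * rhoWall

    -- Doubling the cell edge while the air density drops by a factor of 8 keeps
    -- the displaced-air term constant, but the wall term grows by a factor of 4,
    -- so the walls must be made thinner or the lift per cell shrinks.
    ```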


    At some point one ends up with e.g. long trusses of single walled nanotubes (or fireproof sapphire rods) that become wobbly from thermal vibrations alone. Or with single sheets of graphene as walls. But long before that, destructive environmental factors ("forces of nature") may put a stop to such ambitions.
    (Here I'd like to ask readers to please check this rough train of thought for major mistakes.)


    Today's helium balloons hit a wall at about 50 km height. They use roughly 3000 nm thick plastic film.
    Jaxa: http://global.jaxa.jp/article/interview/vol42/p2_e.html
    By replacing the helium fill with the greater part of the shell thickness converted into internal fractal trusswork structures that resist the now inward-acting external pressure against the internal vacuum, one gets rid of the problem of varying internal pressure due to day/night temperature variations. The other way around: keeping the height constant, the external pressure will stay roughly the same while the external air density will vary somewhat between day and night - that seems less problematic.


    At a certain height Earth's atmosphere begins to unmix and stratify, with lighter species higher up.
    This region is called the heterosphere. Here is a diagram:
    https://commons.wikimedia.org/…A4re_Temperatur_600km.png
    https://de.wikipedia.org/wiki/…4re_und_Heterosph%C3%A4re
    This poses an additional limit on how high one can go.
    It is very questionable whether anything above 100 km can be reached at all, though.



    >> General remarks about the Earth's atmosphere


    As a rule of thumb the air pressure in Earth's atmosphere halves with every 5.5 km of height.
    Thus, compressed down to a uniform 1 bar in hypothetical weightless space, the whole atmosphere would only be about 8 km thick.
    With a few exceptions it is probably impractical to put any major weight-carrying structures much above that height mark.
    Being high enough to be above most of the weather activity (bottommost part of the stratosphere - like airliners) may be beneficial for some applications.
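    A quick sanity check of that rule of thumb (nothing more than the isothermal barometric formula):

    ```haskell
    -- If the pressure halves every 5.5 km, the e-folding scale height is
    -- H = 5.5 km / ln 2, and an atmosphere compressed to constant surface
    -- density would be exactly H thick.
    scaleHeight :: Double
    scaleHeight = 5.5e3 / log 2            -- ~7.9e3 m, i.e. roughly 8 km

    pressureAt :: Double -> Double         -- height [m] -> pressure [bar]
    pressureAt h = 2 ** negate (h / 5.5e3)

    -- pressureAt 11e3 ~ 0.25 bar (roughly airliner cruise altitude)
    ```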



    >> Propellant-less space launch system ??


    Such a thing would be a pretty dense and heavy, very long, perfectly circular (Earth-radius) tube floating at a height where the atmospheric density is low enough that the deceleration shock on release into the atmosphere is less than 10 g (how to calculate this height for e.g. LEO speed (~8 km/s) and escape speed (~11 km/s)?).
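    To get a first feel for the required release height, here is a crude sketch of the peak drag deceleration right after release (the ballistic coefficient in the comments is a made-up assumption; a real vessel could differ a lot):

    ```haskell
    -- Peak deceleration right after release into still air:
    -- a = rho * v^2 / (2 * beta), with beta = m / (Cd * A) the ballistic coefficient.
    decelInG
      :: Double  -- air density at release height [kg/m^3]
      -> Double  -- release speed [m/s]
      -> Double  -- ballistic coefficient m/(Cd*A) [kg/m^2]
      -> Double  -- deceleration in multiples of g
    decelInG rho v beta = rho * v * v / (2 * beta) / 9.81

    -- decelInG 3.0e-3 8000 1000  ~  9.8 g   (rho ~3e-3 kg/m^3 is found at roughly 40-45 km)
    -- decelInG 3.0e-3 11000 1000 ~ 18.5 g   (escape speed needs an even thinner release altitude)
    ```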


    Josh Hall proposes a sequence of 80 km high towers (mesopause - the coldest point in the atmosphere, about -100 °C) holding such a space launch system up. (It's rather scary imagining them crashing down.) But is the pressure at 80 km low enough to allow direct orbital launch?


    A circumglobal mesosphere-to-thermosphere space launch corridor would, even if the mass per length is kept as minimal as possible, have to have a buoyancy-providing enclosing lifting device of imposing diameter (estimated minimum at 80 km: ~2 km for 100 kg/m; ~5 km for 1000 kg/m?). This is starting to reach down into the denser parts of the atmosphere, making it more like a ship swimming on the atmosphere.


    The lifting device for such a system would of course be ridiculously filigree. It's not unlikely that such ambitions will be thwarted by UV damage or micrometeorites. Wikipedia says: "The lower stratosphere receives very little UVC", but here we are higher than the ozone layer (average height of the ozone layer: 15-20 km, in the tropics 20-30 km - btw: stratospheric airmeshes could be used to replenish or further fortify the ozone layer), and UV-B and UV-A come through anyway. The one thing that's unproblematic is the massive availability of space, precisely because nothing else is capable of staying stationary at these heights.


    As long as one does not get too far into the overpressure regime (which limits today's balloons) one may be able to extend the height limit a bit by more conventionally using a bit of lifting gas to help. There's plenty of hydrogen available, but in an oxygen rich atmosphere, even when enclosed in a fireproof metamaterial, this seems unsafe (true?). Both helium and neon are rather rare. It would take much time and energy to concentrate them for lifting bigger stuff like a space launch system.
    In a hyper long term perspective one could say that concentrating up all the light noble gases of our atmosphere is a good idea since it keeps them from further depletion to outer space. One could speculate about light noble gases as a space resource, but the solar system's major helium depots Uranus and Neptune seem to have too deep gravity wells to send out anything but photons.
    Placing a vacuum balloon space launch system on Uranus or Neptune would be even more challenging due to their lightweight hydrogen atmospheres.


    As mentioned before, to lift dense and heavy objects to great heights a continuous gradient towards lower-density material is necessary.
    Thermospheric space launch systems would take that to the extreme.



    >> Conclusion


    As you can see, the range of possibilities with this kind of technology would be enormous.


    I have an intersecting set of ideas collected on my wiki:
    http://apm.bplaced.net/w/index…ust_metamaterial_balloons


    Any feedback on those ideas?


    FIN

    >> The awesome part - for nice illustrations and dreaming about the future


    In some of the vertical "sky-strings" elevators could be integrated.
    A (pressurised) stairway to the stratosphere would be an epic multi day climb.
    Imagine the view from up there. With three point rope suspension one actually can reach any point in the sky.
    Beside the view you'll get perfect silence (and quite a bit of radiation).


    Climbing metamaterial "sky-strings" directly (assuming structure to grip on is made present) might feel like
    standing on a rubber bouncy castle. Under too much pressure the material might temporarily collapse where you stand / where you grip it. This quickly gets more serious with rising altitude, where the metamaterial becomes less and less dense (the cell size grows).


    So to properly support human climbers (or other loads like strengthening ropes or chemomechanical power cables) proper solid structures are necessary. Although future devices will be very light by today's solid-steel-world standards, these functional core structures are still heavy and dense relative to the lifting metamaterial. So to lift the strong, dense "core-structures" one has to link them to the lighter-than-air metamaterial. At low altitudes this might work out pretty directly (just as with current day balloons). At higher altitudes a gradient of cell size, or even a fractal root net of smaller, non-floating cells, can softly connect to the big cells that provide the negative lifting density.



    >> Transportation


    On a much smaller scale than for weather control, air-meshes seem to be applicable to local urban aerial transport.


    With regard to transport I extended on the ideas presented in Josh Hall's book Nanofuture:
    For reference, in Nanofuture Josh Hall proposed the individualist solution without lighter-than-air structures: untethered, free moving, shape shifting vessels that lift off with very very long telescoping stilts to keep downwind noise from air turbulence low. Once in the air they switch to a second, sailship-like mode to gain both speed and height, and once at speed they change again to a third, jet-like mode. He proposes "infinitesimal bearing parallel motion cloaking". My two cents: "adiabatic normal motion cloaking" could also be used. (I've explained both techniques above.)


    I thought about replacing the scary telescoping-stilt start with safer mobile lifting pillar balloons (pillar-shaped to save space on the ground) or just static cables hanging down from the air-mesh. Both would lift gondolas up and down to and from a rail system in the airmesh, thus replacing part of the local transport with very direct, congestion-free gondola-like transport. A form of transport that does not use the inefficient method of blowing air downward for lift :S (and propulsion) but simply reaction force against the airmesh grid.


    With increasing distance it makes sense to remove the "obstacle air" altogether.
    Airmeshes allow putting horizontal vacuum pipe "railtracks" in the air where there are no hard obstacles that make speed limiting curves necessary. Superfast "aerial vacuum trains" so to say. The vacuum tube could be seen as a very large unsupported vacuum cell in the core of a very fat, also vacuum filled, multi-celled airmesh-filament structure. For longer ranges such systems are probably best situated at the lower edge of the stratosphere (10-20 km) to avoid weather.


    The heavy passenger capsule drive system would be integrated in lighter-than-air metamaterial sausages of quite impressive diameter.
    Shorter range tracks lower in the atmosphere will need a combination of tight tiedown to the ground and dynamic windload compensation sufficient for their operation speed. Longer range, faster tracks can be placed in the calmer stratosphere, enclosed in even more impressively sized metamaterial sausages.


    Note that in contrast to current day concepts like the hyperloop with the availability of infinitesimal bearings magnetic levitation (needing special chemical elements for the magnets) can be avoided. With the distance to the wall provided by physical contact via the infinitesimal bearings there is no rest-gas needed for air-hockey like suspension. A full vacuum is possible.



    >> Interplay with existing and future air traffic:


    Legacy air-traffic (old-timer historical kerosene driven noisemakers) should still be able to fly by sight.
    Airmesh designers must consider that in their plans too. This may come in conflict with the desire to keep the looks of the landscape pristine for human eyes (optical cloaking).
    It is difficult to guess how visible the "air filaments" will be when designers just do not care about the looks.
    Likely appearances may be: transparent, milky, iridescent - like deep sea creatures??


    There could be constantly open flight corridors in the mesh, or the mesh could dynamically open up windsails so that vessels can move through. The sails should be able to detect point-like, non-wind-like forces and rupture in a controlled, reversible fashion when a plane or a bird crashes into them.



    >> Anchoring density and anchoring pattern


    There are a lot of questions:
    * What would be the most practical and aesthetic meshing pattern (foam edges?)
    * What would be a good density of anchoring points on the ground in cities and on land?
    * How would one do the anchoring of an airmesh at sea?
    * What does one end up with if the mesh concept is applied to other "XYZ-spheres" (hydrosphere, lithosphere, biosphere, ...)?


    ... 10,000 character limit ...

    >> Intro:


    While cycling through the cornfields I recently had a eureka moment 8o when coming up with a really wild and crazy idea about what could be possible with underpressure based, robust, lighter-than-air metamaterial structures.


    I regularly ponder about how AP technology could be applied to solve a number of problems.
    The idea I had may solve at least three of them and opens up a whole bunch of other opportunities and interesting questions.


    The three problems solved are:


    A) The problem of keeping something stationary relative to the ground in a high up laminar large scale wind-current (e.g. CO2 collectors in the sky). This seemed to be impossible without expending energy to actively move against the current.


    B) I thought about what is likely to replace today's mostly three bladed windmills that barely scratch/tap the lowest percent of the troposphere (100 m of 10 km). Obviously some silent, sail-like air accelerator/decelerator sheets should become possible.
    Future "power-windsails" may be quite a bit bigger than today's windmills but they still need to be linked to the ground for counter-force and counter-torque. To avoid excessively large bases, advanced sail-like wind generators probably would not be made excessively large (that is, a large fraction of the 10 km troposphere). Also, giant towers permanently emanate the danger of coming crashing down.


    C) I thought about extracting the potential energy from rain droplets: couldn't one look at clouds as almost everywhere available catchment lakes in the sky?



    >> So here's the idea:


    Specifically what came to me was to massively employ lighter-than-air structures in the form of aerogel-like "strings/filaments" (quite thick in diameter) that are tied/anchored/tethered to the ground and also intermeshed with themselves up in the sky. In the following I will refer to those structures as aerial meshes or airmeshes or airgrids. Everything is kept held in place at all times. This is kind of remotely similar to the principle of machine phase in the nanocosm, and it too comes with some advantages.


    These structures seem to be easy to erect at giant scales. They could be applied for:
    * aerial traffic
    * large scale energy extraction
    * and even, in reverse, as a means for super large scale strong weather control (ozone too)


    Besides spanning "windsails" in the mesh loops of the "air grid", "solar sails" are obviously also possible.
    Also there may be rain sails, which I'll explain later.
    All sails could/should be equipped with temporary deployment capability and with modes that let part of the wind through (lamellas?).



    >> Wind-loads


    Obviously one must worry about excessive windloads.


    Even if uncompensated advanced materials might be able to withstand windloads (estimations needed), the floating air strings / air filaments could be armed with a dense rope in the core. Assuming a density of 4 kg/dm^3, a strong rope of about 1 cm diameter needs to be embedded in a lighter-than-air string of at least about half a meter diameter so that the combination starts floating.
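    A quick check of that half-meter figure (assuming essentially all the lift comes from the displaced air and the metamaterial's own mass is negligible):

    ```haskell
    -- A 1 cm rope at 4 kg/dm^3 (= 4000 kg/m^3) weighs ~0.31 kg per meter.
    -- With ~1.2 kg/m^3 air near the ground, the enclosing lighter-than-air
    -- string needs a diameter D with airDensity * pi * (D/2)^2 >= 0.31 kg/m.
    minStringDiameter :: Double -> Double -> Double -> Double
    minStringDiameter ropeDiam ropeDensity airDensity =
      2 * sqrt (ropeMassPerMeter / (airDensity * pi))
      where ropeMassPerMeter = ropeDensity * pi * (ropeDiam / 2) ^ 2

    -- minStringDiameter 0.01 4000 1.2 ~ 0.58 m, so "about half a meter" checks out.
    ```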


    To prevent critical loads and temporary collapse of the metamaterial due to wind pressure making it temporarily non-buoyant, there is the possibility of windload compensation.
    Luckily with APM there's no additional cost in making the whole surface an active "living" structure.
    By integrating two other technologies, windloads may be reducible to acceptable levels or even completely compensatable.
    Conveniently, when there is windload there is also local power for the protection mechanisms.
    The two main technologies usable for wind-load compensation are: (names freely invented)


    A) "infinitesimalbearing parallel motionion cloaking"
    B) "adiabatic normal motion cloaking"


    A) "infinitesimalbearing parallel motionion cloaking" (this was presented by Josh Halls in his book "Nanofuture" as a means for propulsion) When air moves parallel to a surface the surface is moved with the same speed in the same direction. This replaces friction in air with much lower friction of "infinitesimal bearings" that are integrated in the air-vessels (or here air mesh strings) topmost surface layers.


    B) "adiabatic normal motion cloaking"
    When the aforementioned technique is used the air still needs to get out of the way sidewards of an obstacle.
    While the aformentioned technology/technique can compensate for parallel air motion there still remains a motion component that is head on to the surface. Obviously this must be a motion of one period/impulse of incoming and then outgoing air in the frame of reference that is moving with the parallel motion compensation speed (I hope that formulation is sufficiently comprehensible).
    What one would try here is to "grab" pockets of air compressing them down as they approach (this heats them up so they must be kept sufficiently thermally isolated to not loose their enegry) and then expanding them up again. This technique may be capable of reducing bow waves. (Though I'm rather wary about whether this could/would work or not.)
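    For a feel of the temperatures involved in such grab-compress-re-expand cycles, here is a minimal sketch using the ideal-gas adiabatic relation (purely illustrative numbers):

    ```haskell
    -- Ideal adiabatic compression: T2 = T1 * (p2/p1)^((gamma-1)/gamma), gamma = 1.4 for air.
    -- Doubling the pressure of a 288 K air pocket heats it to ~351 K; that energy must be
    -- kept (thermally isolated) and handed back on re-expansion.
    adiabaticT :: Double -> Double -> Double -> Double
    adiabaticT t1 p1 p2 = t1 * (p2 / p1) ** ((gamma - 1) / gamma)
      where gamma = 1.4

    -- adiabaticT 288 1 2 ~ 351 K
    ```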



    >> Robustness against lightning (and ice loads)


    Obviously one must worry about lightning. There seem to be two polar opposite options.


    A) Adding lightning rods of highly conductive material. On a large scale this would probably be a bad idea. They are likely to negatively influence the weather by quenching thunderstorms and the air-to-ground potential in general.


    B) Making the "air-strings" electrically highly insulating (not hard for an aerogel metamaterial made of a high-bandgap base material).
    A thin film of intermediately conducting water droplets, which heats up when lightning strikes (it converts to plasma and may damage the surface), may be avoidable by making the surfaces highly hydrophobic. As a nice side effect, combined with small scale active surface movement this can also prevent any ice deposits and thus dangerously high ice loads.


    A&B) A third option is to make the structures switchable between the two extreme states.
    This may allow to extend the weather control to electric aspects of the atmosphere.


    Avoiding long stretches of electrical conductors (km scale) generally seems to be a good idea.
    By exclusively resorting to chemomechanical energy transmission one gains resilience against directly hitting solar storms (giant protuberances heading directly towards Earth, which would be devastating today due to the induction of high voltages in long power lines) and maybe even resilience against EMPs from not-too-near atomic blasts (which hopefully will never happen).



    >> Exotic untapped energy forms:


    There's a constant, quite high electric field between ground and sky (aerostatic electricity).
    I don't know how much energy is in there and what would happen if large fractions of this electric reservoir were to be extracted or boosted. There's some questionable science going on there with today's pretty limited technology.
    Simple experiment:


    A little more dangerous:


    Slanted horizontal "sails" hanging below the clouds could be used like funnels guiding the rainwater to the "air-mesh-filaments", which then act like eaves troughs in the sky, allowing to tap the full potential energy of rainwater. Then we wouldn't depend on mountains with a suitable high-up valley that can be dammed anymore.
    Most of the rain must be redistributed at a lower level (like a shower head in the sky - rain sails?!) so as not to negatively influence vegetation. Yes, that sounds ridiculous, but it might make sense.



    >> The structure of the lifting metamaterial


    For further discussion of the limits of the technology I need to go a little more into the detail of the structure of the lifting metamaterial. These ultra light metamaterials are made out of cells with thin gas-tight walls and internal 1D trusses (possibly fractally arranged) that prevent collapse under external pressure. The main function of the walls of each cell is just gas exclusion. The "sky strings" have many basic cells throughout their diameter, and this compartmentalisation, finer grained than the whole air string, gives some redundancy and safety.

    Advanced surface functionalities of the airmesh strings are not located on every cell wall but only on the outermost walls of a "sky string" or independent balloon. These outermost surface functionalities are not part of the base metamaterial.

    If the metamaterial is made out of an incombustible base material like sapphire then there is little to no chance that these structures come crashing down. Nice! The internal trusswork might be equipped with active components to adjust cell sizes a bit, so that buoyancy can be adjusted. Too much buoyancy is bad too, because of too much upward pulling force on the anchor points.


    ... 10,000 character limit ...

    I for one am regularly checking in here to see if there's anything new.
    And I will continue to do so.


    What kept me from posting?


    1) I was visiting the first ever Maker Faire in Vienna, Austria, showing off my collection of 3D prints.
    I also made a lot of graphical infosheets for A4 flipcharts about 3D printing and APM.


    2) I tried to keep myself up to date with cutting edge new high level stateless interactive programming methodologies (applicative functional reactive programming) since I think this will be of paramount importance for 3D modelling the future reality (elm, purescript, GPU stuff, ...).
    Actually the programmatic 3D modelling software I currently use (OpenSCAD) causes me a major amount of suffering since, with its lack of higher order functions, it does not allow me to create highly reusable libraries (specifically I hit a wall with gears & threads). This stops a lot of other ambitions in their tracks - stuff that depends on gears and threads, which obviously is a lot.


    3) What I also did was documenting a first draft of an idea I had regarding macroscopic self replication.
    I've published it here:
    http://reprap.org/wiki/RepRec
    (It depends on gears and threads :S)
    I think a working self-replicating pick & place robot capable of performing practical tasks may make the concept of exponential assembly be perceived as more plausible.
    Note though that in the macrocosm there's gravity but no sticking force and in the nanocosm it's the other way around (actually there is gravity, but it's overpowered by thermal noise), and thus macroscopic block based self replication only partially overlaps with nanoscopic block based self replication.
    The existing approaches to self-replicating pick and place robots like this one:
    http://rpk.lcsr.jhu.edu/wp-con…ses13_An-Architecture.pdf (Matt Moses et al)
    use way too few pre-produced base part types. This makes them clunky and totally impractical e.g. for automated 3D printer assembly. Also existing approaches use clipping or friction (including friction in screws) for assembly.
    This makes large systems out of many small parts unnecessarily lacking in stiffness.
    I think that using the principle of reinforcement like in concrete construction is the right way to go:
    just make diversely profiled short modular rod segments with channels going through, through which rebars can be fed that are themselves composed of short modular segmented chains.
    Recently a new model of self replicating 3D printer came out: http://dollo3d.com/
    I think the herringbone drive method is a good solution; the mounting method, namely friction plugs, not so much.


    Here's some other stuff of what I was up to lately:
    * I'm still steadily extending my APM wiki.
    * I just barely started moving the highly graphical German-language presentation stuff I have lying around into the English wiki (I need to focus more on that)
    * I still didn't get around to making those youtube videos. I regularly think about them.
    * I still didn't get around to making really nice drawn illustrations.
    (I know that I potentially do have that drawing skill level)
    * I was watching some videos about Google Tango, Google Daydream and a bit of deep learning.


    ----


    About molecular biology of the cell: I once attended a two semester course on the topic.
    Pretty interesting stuff - the things that surprised me most were:
    * that the transcription from DNA to RNA and the translation from mRNA to proteins happen in a pretty parallel fashion. It looks like a sparse feather.
    * that the visualisation pictures of cell membranes are basically all very wrong, showing way too much lipid layer and way too few proteins going through, and showing the size ratios very wrongly too.
    * the character of the chain of energy transport, with something like waiting-position control points
    * the crazy density of the endoplasmic reticulum in the cell not clogging up all the transport
    * the effect of compartment dimensionality on diffusion transport.
    * the rich RNA world beside the protein world


    It seems that only a tiny fraction of molecular biology is really directly applicable to even the early bio-inspired stages of advanced APM bootstrapping. I think it'll still take some more time until dedicated resources and courses on that topic are made.


    You seem to have read a lot about history too. This is not an area where I usually snoop around since it is even further away from advanced APM than molecular biology. Since I'm unlikely to read this set of literature: if you've found/find something that may surprisingly be applicable specifically to bootstrapping APM, don't hold back telling us here. Even if it's just a hunch.


    ----


    Awesome that you now have got a 3D printer :)
    I have an Ultimaker Original (one of the very earliest batches).
    You say that you'd like to "design and build some nanotech related tools".
    Do you have anything specific in mind?
    I have made quite a set of APM principle demonstration objects by now.
    (I need to post a picture)


    >> "It's amazing sometimes how much time one expends on the allegedly "trivial" aspect of mounting and arranging parts in some apparatus."
    Yes, 3D printing can be very time consuming. I usually spend much more time designing than printing.
    Luckily my printer is in a state where it operates without a hitch almost all of the time, so this is not a place where I lose much time anymore.


    ----


    PS: I have at least two further major posts for the forum in the pipeline.


    PPS: I don't really know what keeps others from posting.
    I think I vaguely know what Eric Drexler is up to right now:
    With molecular sciences more and more on the right track and the recent advances in machine learning (deep learning / deep dream)
    I think Drexler has switched his main focus to artificial intelligence (tensorflow & stuff?).

    Quote from Jim Logajan

    I'm probably missing something, but I don't see how one can draw any inference about future tools from the first set of invented tools for molecular carbon mechanical synthesis. I skimmed sections 13.3.7, 13.3.8, and 8.5.2 of Nanosystems and it does not look to me like there are any dangers. Even if none of the released binding energy were stored for later re-use, the universe is awash in thermonuclear energy. I think it is something of a technical oddity that so much of it is temporarily inaccessible. Energetically inefficient nanotech production would still be, on a relative scale, far more efficient than current tech production.

    Ok, I see that I missed some important points in my initial post.


    1)
    I do not infer a limitation on future toolsets based on this early toolset feasibility analysis.
    In fact I try to show the contrary. I reckon that the "proven" possibility of energetically reversible mechanosynthesis should imply the possibility of bond-topologically reversible mechanosynthesis.
    So it even makes sense to talk about bond-topologically reversible mechanosynthesis (dis-assembly) - (establishing a discussion basis).


    The point I'm trying to make is that in a competitive market early attempts are unlikely to wait with production till recycling is perfectly figured out.
    And that bond-topologically reversible mechanosynthesis (by coupling separate mechanosynthetic reactions together in the background and applying the illustrated principle) looks to be a lot more difficult to achieve than bond-topologically irreversible mechanosynthesis. There's the additional difficulty, which I haven't mentioned yet, that one has to do more than the easier open-loop control to take stuff apart that has been damaged (radiation, heat, ...).


    In short what I am really worried about is the temporal sequence of development.
    In the early development stages (DNA, proteins) bond-topological-irreversibility is irrelevant since bio-organisms can do the recycling for us. But when we start to arrive at the diamondoid stuff (swimming like plastic or even floating in the air) and arrive at high production volumes before the cleanup is fully figured out we might be in for trouble - damaging nature.


    I have no clue how much influence we will have on the temporal sequence of development - I'd guess rather little.
    What I think should be easier to achieve than bond-topologically reversible mechanosynthesis is making nanosystems out of many reusable small parts instead of fusing them together into a single monolithic crystal.
    This way diamondoid products can at least be recycled into themselves for a while.


    2)
    I do not think high energy consumption prevents recycling.
    Actually the contrary. The fact that the atom by atom assembly step is the most energy consuming one (because of the most surface area, not because of any lack of efficiency) gives a strong incentive to make bigger parts ("crystolecules") reusable. That is: not fuse them all together into a single macroscopic block. Also production by recycling of "crystolecules" (~<32nm) or even bigger "microcomponents" (~<1um) should be much faster since less waste heat has to be removed.


    X)
    I think there's a recurring pattern in history that stuff gets produced in masses when it can't yet be disposed of and that it then produces problems due to its piling up. Side note: beside human civilization, nature too provides such examples, albeit on much bigger timescales. Examples are "the great oxygenation event" and "the lignin catastrophe" (less known).
    I think the widespread belief (among the ones that even know about APM) that nanofactories are exempt from that pattern might cause problems: blindness towards the possible danger of waste. What will really happen all depends on which capabilities we arrive at, and when.


    I think in any case we'll need to have waste management guidelines.
    I'm collecting info about recycling on my wiki:
    http://apm.bplaced.net/w/index.php?title=Recycling


    ---------------------



    Quote from lsuess

    Can one imply from energetically reversibility to bond-topological reversibility?
    ....
    I think to find an answer to this question is highly relevant for recycling (in the advanced end of the technology spectrum)


    I wrote nonsense there. - Your comment helped me to reformulate what I really meant:
    How much effort will it take to achieve bond-topological reversibility?
    Given that the possibility of energetic reversibility should imply the possibility of bond-topological reversibility.
    I think finding an answer to this question is highly relevant for recycling (at the beginning of the advanced end of the technology spectrum).

    Quote from Jim Logajan

    I'm afraid I don't know what the image is supposed to be showing me. None of the axis are labeled - do you have some additional context or discussion somewhere?


    I made an annotated version now.
    (I've attached the original lossless vector graphics (*.svg) in zipped form -- *.svg not allowed)

    In this video ( "Mechanosynthesis - Ralph Merkle & Robert Freitas" )
    R.Freitas says that you don’t take mechanosynthesized stuff apart again.
    See here: (Jump forward to 47:42)


    Guy in audience: "If I place a germanium incorrectly which tool do I use to get it off."
    R.Freitas "You don't."



    So the set described in the tooltip paper discussed there
    ( http://www.molecularassembler.com/Papers/MinToolset.pdf )
    is not reversible in its bond-topology state.
    The set was also not simulated in a way that maximizes reversibility in energy but instead in a way that
    makes it somewhat reliable at 300K (E_react = 0.40 eV gives P_react = 2*10^-7)
    and extremely reliable at 80K (E_react = 0.40 eV gives P_react = 5*10^-26).
    But most importantly the reactions were considered standalone and uncoupled to others.
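    The reliability numbers quoted there drop straight out of a Boltzmann factor over the reaction energy (just reproducing the figures, no new analysis):

    ```haskell
    -- P_err ~ exp(-E_react / (kB * T)); with E_react = 0.40 eV this reproduces
    -- ~2*10^-7 at 300 K and ~10^-25..10^-26 at 80 K.
    errorProbability :: Double -> Double -> Double
    errorProbability eReactEV tKelvin = exp (negate (eReactEV / (kB * tKelvin)))
      where kB = 8.617e-5   -- Boltzmann constant [eV/K]

    -- errorProbability 0.40 300 ~ 1.9e-7
    -- errorProbability 0.40  80 ~ 6e-26
    ```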



    According to E.Drexler mechanosynthesis can be made to achieve very high levels of energetic reversibility:
    Nanosystems 13.3.7.b.
    ... reliable mechanochemical operations can in some instances approach thermodynamical reversibility in the limit of slow
    motion. ... ... The conditions for combining reliability and near reversibility are, however, quite stringent: reagent moieties must on encounter have structures favouring the initial structure, then be transformed smoothly into structures that, during separation, favour the product state by ~ 145 maJ (to meet the reliability standards assumed in the present chapter). ...



    * "smoothly" I think means forces times movements must be captured in the machine phase background. Holding against pulling force - preventing ringing snapping.
    * Furthermore I think that one needs to couple multiple reactions with E_react-one<<kB*T energy-loss per deposition/abstraction
    together to E_react-all>kB*T as a whole to prevent the single reactions from running backwards.


    I made a 3D model for visualizing the qualitative progression of the energy wells that is necessary for an energetically reversible mechanosynthetic operation. This model is quantitatively disconnected from any particular physical process like e.g. hydrogen abstraction.
    http://apm.bplaced.net/w/index…nosynthesis_principle.jpg


    The question is:
    Can one infer bond-topological reversibility from energetic reversibility?



    Surely it seems difficult to rip out a carbon atom from the centre of a flat diamond (say 111) surface.
    But if the atomically flat plane does not have macroscopic size one can start from the edges, where fewer than three of the four bonds are inaccessible. Astoundingly there was an AFM experiment conducted where, on an atomically flat surface, embedded tin atoms were controllably swapped with silicon atoms and vice versa (surface-to-tip).
    https://www.uam.es/gruposinv/s…gy_4_803_Custance_AFM.pdf
    They used a lot of tapping, akin to what E.Drexler describes as "conditional repetition".



    I think finding an answer to this question is highly relevant for recycling (at the advanced end of the technology spectrum).
    The official nano-factory video says something like "the only waste products are clean water, clean air and heat".
    But what about the product itself once its microcomponents become obsolete?



    If mechanosynthesis can't be made bond-topologically reversible right from the start, the only ways to get rid of obsolete versions would be:
    * burning them - only possible if they don't form slag due to incorporated Si, Al, Ti, ...
    * dissolving them (sodium beam treatment, acids, ...)
    If even that is not done we might sink deeply into diamondoid waste.



    I think that might be the most severe and most overlooked danger of APM.

    Many thanks for migrating all the posts from the Beehive forum to the WoltLab forum for better maintainability. All the posts seem well preserved :) - perfect job.
    Also I'm delighted to see that you've implemented my suggestions for sub-forum topics.


    Now everything is ready for a great year 2016 :)

    >>... Also, the underlying hardware operates imperatively (it has state that changes with time) so there is a mismatch between the declarative notation and what is actually occurring on the machine. ... <<

    Strong objection!!

    In advanced, sensible target nanosystems (excluding early, slow, diffusion based nanosystems, e.g. DNA)
    the lowest levels of the underlying hardware need to be near reversible to prevent excessive heating.
    And the needed *reversible low level logic has NO inherent state that changes with time!*

    To elaborate on that:
    All the internal apparent state (I'll call it pseudo-state) is completely predetermined by the starting state (I'll call this genuine-state), which is located at a higher level. This is the case because the bijective transformation steps (which define reversibility) allow no branching in the "forward" or "backward" direction of execution. The internal pseudo-state can appear big (in memory usage) relative to a comparatively small external genuine-state because the pseudo-state is just decompressed genuine-state. Decompression introduces no additional information (state). Since stretches of low level reversible computation are, as shown, stateless, they are pure functions and *inherently functional*!
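    A tiny illustration of that point (nothing nanofactory specific, just the logic): a reversible gate like the Toffoli gate is a bijection and a plain pure function, so a stretch built from such gates is itself just a pure function of its input:

    ```haskell
    -- Toffoli (controlled-controlled-NOT) gate: bijective and its own inverse.
    toffoli :: (Bool, Bool, Bool) -> (Bool, Bool, Bool)
    toffoli (a, b, c) = (a, b, c /= (a && b))   -- target flips iff both controls are set

    -- toffoli . toffoli == id on all 8 inputs, i.e. no information is ever lost:
    selfInverse :: Bool
    selfInverse = and
      [ toffoli (toffoli (a, b, c)) == (a, b, c)
      | a <- [False, True], b <- [False, True], c <- [False, True] ]
    ```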

    About the length distribution of reversible stretches (granularity and upreach):
    To save a maximum amount of energy one needs to cover the lowest HW level with many long stretches of reversible computation. Accomplishing that shouldn't be a big problem at the lowest cores of a nanofactory, where you have the rather simple problem of churning out a very great number of identical standard parts via simple open loop control. Further up in the physical assembly hierarchy it might become more interesting, with richer part composition situations and more complex nano- to micro-logistics - more on that later. It is possible to composably program long and big lowest-level reversible computation stretches (obviously they are not monolithic). It will be done, and it necessarily is purely functional - otherwise reversibility would be destroyed. There is some research on reversible assembly languages - I currently can't guess whether those will or won't be programmed "by hand".

    ---- An alternate approach:

    I have another way to show why I think it's unsurprising that low level hardware is most often perceived as inherently stateful although this is wrong. For this I'll need to briefly describe a maybe (??) barely known concept that is IMO very important:

    The concept of *Retractile Cascades* (as I understand them):
    Legend: X, Y, Z
    ... each stand for the same number of arbitrary bits (words, bytes, whatever - but equal in size)
    ... different X's do not have mutually equal content, and the same goes for the Y's and Z's

    When computing reversibly (e.g. with rod logic)
    1a) (X+YY+YYYY+...) starting from the input X, the amount of memory newly used per computing step first grows as far as necessary (in rod logic every step corresponds to pulling all rods of an evaluation stage)
    1b) (X+YY+YYYY+YYYYYYYY+YYYY+YY+Z)=M then the (still memory-usage-increasing) per-step additions shrink back down until a small desired result Z is reached in the last step.
    1ab) Overall there is monotonic growth in used memory space - first fast, then slow.
    2) (Z':=Z) make an imperative, target-destructive copy of the output Z. This causes some waste heat, but not too much. Caution: Z's information content (entropy) and its memory space usage are distinct things.
    3ab) M-Z-YY-YYYY-YYYYYYYY-YYYY-YY=X finally, starting from the result Z at the end of the cascade, the whole thing can be reverse executed ("retract the cascade") to free the used (garbage filled) memory space for the next computation in a (near) dissipation free manner. The cascade's input X is then ready for an imperative destructive update that starts the next cycle.

    So basically a retractile cascade is a stretch of reversible computation optimized to save energy.
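    Here is a minimal, purely illustrative sketch of that cycle in code (the stage/record names are my own invention): each stage is a bijection carried together with its explicit inverse, the forward pass keeps all the intermediate "garbage" alive, one destructive copy extracts the result, and the reverse pass retracts everything back to the input.

    ```haskell
    -- One reversible evaluation stage, paired with its inverse (bwd . fwd == id).
    data Stage a = Stage { fwd :: a -> a, bwd :: a -> a }

    -- 1a/1b: forward pass, keeping every intermediate result (the pseudo-state).
    forwardPass :: [Stage a] -> a -> [a]
    forwardPass stages x = scanl (flip fwd) x stages

    -- 3ab: retract the cascade, uncomputing the garbage back down to the input.
    retract :: [Stage a] -> a -> a
    retract stages z = foldr bwd z stages

    cycleOnce :: [Stage a] -> a -> (a, a)
    cycleOnce stages x =
      let intermediates = forwardPass stages x
          z             = last intermediates   -- cascade output
          z'            = z                    -- 2: destructive copy, the only lossy step
          x'            = retract stages z     -- memory freed, input recovered
      in (z', x')
    ```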

    Now - to show why this can seem imperative, i.e. why pseudo-state may seem like genuine-state - such a retractile cascade can be visualized as a directed acyclic graph depicting the mutual dependencies of the memory cells. It starts at a root input node, branches out, and then merges back to a single output node. If one crops out a patch from the centre of this graph and asks how a particular value/bit emerges at a particular node inside this patch, while only having the cropped-out piece of the graph available for reconstruction, one needs a lot of genuine-state on the edges of the cropped-out patch - namely at all the places where the incoming (or outgoing) edges cross the border of the patch. If the observed context (the cropped patch) is too small, stuff that is actually functional appears to be imperative. The other way around: if you have sufficient knowledge to move your horizon of perception farther outward, more of the true functional nature of seemingly imperative stuff becomes visible.

    I think this often unavoidable limited-context tunnel view, combined with the fact that energy saving reversible logic is still a thing of the future, is one of the main reasons why low level hardware is likely to be mistaken as inherently stateful.

    (analysis->design) For the actual design of reversible computation (instead of the here done analysis) one *needs* sufficient horizon to become functional and thus reversible and efficient. Curiously and luckily its possible to built up this big horizon from small functional building-blocks.

    The abstract gist I see here is that: *statefulness is a relative property*
    since the border between genuine-state and pseudo-state is movable by changing the context.
    genuine-state == information of unknown cause to the observer
    pseudo-state == decompressed information of known cause to the observer
    The cause is the compressed input information.

    ---- Leaving the reversible realm:

    Genuine-state, destructive updates and random number generators (RNGs) are undoubtedly necessary at some point.
    So did I just shift the mismatch problem upwards and draw a picture of a "functional-imperative-functional burger"? I'm not so sure about that.

    The lowest level occurrence of those troublemakers is at the places where the stretches of reversible computation connect. As mentioned before, at those connection places some genuine-state information is located. This is information that encodes some decisions made for practicality ("config-axiom variables"). But state constants are actually functional (pure functions are constants too); the real issue is irreversible operations. When going beyond stretches of reversible computation like retractile cascades, what does it mean to include irreversible non-bijective operations? From an analytic perspective: what if the observed context grows big enough to enclose irreversible joins (deletions and destructive updates) or forks (spawned randomness)?

    Joins - while not reversible any more - can remain functional (same outputs on same inputs). This is often seen in functional libraries that seal imperative code. By carefully packaging deletions and destructive updates up into a functional interface one can restore the functional code maintainability benefits and carry them upward to a higher code level (nestable & scalable). Something similar is often seen in Haskell libraries, though there it is usually rock-bottom, lowest-level imperative code that gets isolated - code that would be unsuitable here because of its way too high density of irreversible updates. Today, using and hiding destructive updates seems reasonable in all situations.

    Forks are actually difficult to create. For deterministic PRNGs there always can be found a context which shows that there actually are no forks. TRNGs (quantum random and physical noise) seem to be truly unisolatable forks. For all practical purposes they seem to introduce absolute genuine-state and thus they may be the one single exception to the relativity of statefulness.

    Since longer reversible stretches are desirable, the connection points of stretches of reversible computation do not lie at rock bottom but at a higher level. On an even higher level than this - namely on the level of multiple joined stretches of purely reversible computation - it is yet rather unclear what ratio of reversible to irreversible steps is to be expected (pervasiveness of irreversibility). In an advanced nanofactory the reversible hardware base will reach up to the "height" where the efficiency argument loses its grip. If that is high enough, the software maintainability argument might kick in before the efficiency argument runs out. Then there'll be no space for an imperative layer in the aforementioned "programming style burger" any more.

    >> ... so while I understood the concept, it was not easy to figure out how to express the same program declaratively. ...<<

    I had that exact same experience.
    Namely with a maze generation algorithm and a small Bomberman game.
    Both seemingly inherently stateful.

    I think the answer to that long standing problem is:
    A.) usage of modern functional data-structures and
    B.) usage of modern functional programming capable of handling interleaved IO

    ad A.) There are already libraries for data-structures with O(1) random access and cheap non-destructive in-place updates implemented by diffing. Haskell's (awfully named) vector library is an example.
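    For concreteness, a small usage sketch with the vector package (whether the updates are actually cheap depends on the structure chosen; Data.Vector's bulk update shown here copies the vector rather than diffing):

    ```haskell
    import qualified Data.Vector as V

    main :: IO ()
    main = do
      let v  = V.fromList [0 .. 9 :: Int]
          x  = v V.! 3                -- O(1) random access
          v' = v V.// [(3, 42)]       -- non-destructive update: v itself stays unchanged
      print (x, v V.! 3, v' V.! 3)    -- (3,3,42)
    ```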

    There's the common critique of slowness due to fine grained boxed data-structures.
    Today this is solved by workarounds (the aforementioned sealed imperativity in libraries for functional languages).
    But I'd guess that at the microprocessor level of advanced nanofactories (not rock bottom) there'll be some architecture optimized for functional language execution that circumvents the so-called "Von Neumann bottleneck". Today there exists a cool demo running on FPGA hardware called "Reduceron".
    https://www.doc.ic.ac.uk/~wl/i…ts/papers/reduceron08.pdf
    It claims game-changing performance:
    https://github.com/tommythorn/Reduceron

    ad B.) first-order functional reactive programming (FRP, 1st order)
    This I just recently encountered with the "elm" language - it blew me away.


    ------------------------------- Performance:

    Regarding lower level:
    >>... For operations requiring real-time responses, such as nano-systems operating in-situ, imperative programming may still be the only realistic choice. ...<<

    Most low level stuff in nanofactories will probably be dead simple open loop control.
    Strong determinism of functional languages is a good basis for reliable systems.
    Nonetheless I guess I need to read up about this a bit.

    Regarding higher level:
    >> ... my understanding is that it is difficult to get predictable performance from existing implementations. ... <<

    This is often mentioned in light of lazy evaluation.
    Lazy evaluation is not inherent to but made possible by functional programming.
    (non-strict -> choose best from both worlds)
    I personally do not have practical experience with laziness in big projects.
    I haven't run into very many complaints about it.
    Here's some low-noise commentary on this:
    https://www.quora.com/Is-lazy-…in-Haskell-a-deal-breaker

    I feel like there has been a lot of improvement over the last years in existing implementations (languages + libraries). By now there are (at least for Haskell) many pre-built libraries for (both lazy and strict) purely functional data-structures with known Landau complexities for time and space (amortized or worst case). Those data-structures contain all the clever and/or dirty work needed to avoid the usual inefficiencies and space-leaks of naive implementations.
    There are quite a few efficiency demos for functional languages (not sure how objective the spectrum is). Especially intensive number crunching with non-trivial parallelism (multi-core, not GPU) is said to be way easier to program with functional-language-enabled "software transactional memory".


    ------------------------------- Usage:
    >>... That project lasted about a year, after which I did not encounter any use of declarative languages.<<

    Many people in the 3D printing maker community (including me) are using "OpenSCAD", a declarative, purely functional programming language (with C-like syntax). In fact I do my 3D modelling work almost exclusively with it. The restriction to the description of static, non-interactive objects makes it very different from "normal programming" though. A nanofactory is a lot about 3D modelling.

    Constructive solid geometry can be made incredibly elegant in functional languages.
    I did an experiment here:
    http://www.thingiverse.com/thing:40210
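    As a tiny taste of that elegance, here is a toy sketch of my own (representing solids as plain point-membership functions; this is not how OpenSCAD or ImplicitCAD work internally):

    ```haskell
    -- Toy constructive solid geometry: a solid is a membership function, and the
    -- boolean operations are one-liners of function composition.
    type Point = (Double, Double, Double)
    type Solid = Point -> Bool

    sphere :: Double -> Solid
    sphere r (x, y, z) = x*x + y*y + z*z <= r*r

    box :: Double -> Solid                      -- axis-aligned cube with half-edge h
    box h (x, y, z) = all ((<= h) . abs) [x, y, z]

    union, intersection, difference :: Solid -> Solid -> Solid
    union        a b p = a p || b p
    intersection a b p = a p && b p
    difference   a b p = a p && not (b p)

    -- e.g. a cube with a spherical bite taken out of it:
    bitten :: Solid
    bitten = difference (box 1) (sphere 0.8)
    ```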
    Then there's the super powerful lazy infinite multidimensional automatic differentiation method invented by Conal Elliott - very useful for gradients, normals, curvatures and whatnot in 3D modelling (Taylor series); see the little sketch further below.
    This and other bleeding edge stuff is AFAIK integrated in the Haskell 3D modelling program "ImplicitCAD",
    written mainly by Christopher Olah.
    Sadly there are two major problems. One: it's still horribly hard to install, which brings me to the point of dependency hell. There's the functional package manager Nix - another example of a practical application of functional programming. And two: this one is actually too complicated to go into here.
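    And a minimal first-order taste of the automatic differentiation idea (plain dual numbers; Conal Elliott's version generalizes this to a lazy infinite tower of all higher derivatives):

    ```haskell
    -- Dual numbers: carrying a value together with its derivative through
    -- ordinary arithmetic yields exact first derivatives "for free".
    data Dual = Dual { value :: Double, deriv :: Double } deriving Show

    instance Num Dual where
      Dual a a' + Dual b b' = Dual (a + b) (a' + b')
      Dual a a' * Dual b b' = Dual (a * b) (a' * b + a * b')
      negate (Dual a a')    = Dual (negate a) (negate a')
      abs    (Dual a a')    = Dual (abs a) (a' * signum a)
      signum (Dual a _)     = Dual (signum a) 0
      fromInteger n         = Dual (fromInteger n) 0

    diff :: (Dual -> Dual) -> Double -> Double
    diff f x = deriv (f (Dual x 1))

    -- diff (\x -> x*x*x + 2*x) 3.0 == 29.0   (d/dx of x^3 + 2x at x = 3)
    ```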

    On another side there is functional reactive programming:
    Conal Elliott again:
    https://www.youtube.com/watch?v=faJ8N0giqzw
    With first-order functional reactive programming (elm - designed by Evan Czaplicki) interactive programming seems to become a breeze - actually it promises to be easier than in imperative languages.
    That should open up the usage space quite a bit.



    ------------------------------- Collected side-notes:
    * Reversible actuators:
    Bottommost reversible hardware includes not only reversible logic but also reversible low level mill style actuators for the mechanosynthesis of standard-parts.
    * Motivating:
    Even in the reversible retractile cascade stretches some irreversibility needs to be added in the clock to give the nanofactory minimal but sufficient motivation to move in the right direction - changing pseudo-future into genuine-future.
    * Pipe-lining:
    Unfortunately retractile cascades seem to block pipelining quite a bit (like Konrad Zuse's mechanical four-phase pipelining in the Z1 purely mechanical Von Neumann computer). It probably comes down to a trade-off between dissipation and speed.
    * Unwanted (?) Freedom:
    Looking again at a point in the dependency graph one can create a partitioning very akin to the light-cone - the "dependency cone". One can find an area with nodes that are neither in the pseudo-past nor in the pseudo-future of the analysed node. In an actual implementation all those non-interacting nodes must be shifted to a relative past, present or future. Thus there is some freedom of asynchronicity. Additional state is needed to fix these free, undefined parts of a stretch of reversible computation. The obvious choice for fixing this is to use a synchronizing clock. After that one can step through all the pseudo-state slices of the reversible-computation-stretch pure function with a one-dimensional slider. Inside a retractile cascade (with the clock included!) there are no rotating wheels, reciprocating rods or other parts that move freely (that is, thermally agitated). Everything is connected. Thus the whole intermeshed nano-mechanic system has only one single degree of freedom. The whole process is fully deterministic.
    * Reversible computing:
    bijective mapping ->
    no spread in state space, neither in pseudo-future nor in pseudo-past direction ->
    constant entropy -> no arrow of time -> no real future and real past
    In contrast to imperative stuff which introduces the situation where you "split reality"
    Y-join: bit deletion (overwriting) - (possibly?) multiple pasts - system entropy decreases
    Y-fork: random bit - multiple futures - system entropy increases
    * (Evaluation stages in retractile cascades do not contain equal information/entropy but the snapshots of the whole Cascade between stage evaluation steps do. -- entropy(output)/entropy(input)=?? )

    As you know I am not very fond of commercial solutions
    as they tend to be not very fond of portability (reason: user lock-in).
    If you feel confident that the next switch (conservative assumption that it will come) will be automatable with minimal data loss there should be no problem.
    That aside:

    Migrating the current posts seems to be a manageable task:
    * there are only 22 threads yet
    * at the current posting rate [ :( ] it seems very reasonable to catch up.

    Bottom line - if you provide a new forum I'll try to migrate the posts (all of them).
    BUT there seems to be a problem: I won't be able to accurately reproduce the threads since I won't and shouldn't be able to post in the name of others.

    Sidenote: I fear that our ongoing migration discussions keep others from posting because they might fear their posts will be lost (*). Or didn't any members visit anyway this last month?

    (*) btw is there an localized packaged archive of the old newsgroup messages?

    "programmable matter" has become a buzzword by now (2015).
    It seems to refer
    * more to active than passive stuff
    * to macro- & micro-scale stuff
    * and usually not to atomically precise stuff

    With nanofactories and especially with fast recomposers for pre-mechanosynthesized microcomponents
    things will become more like: "materializable programs"
    (that includes passive nanoscale atomically precise materials)

    I think one can't overstate the importance that software will have in a world where programs literally are tangible reality.
    If we build our software palace on rotting wooden (and well hidden) code stakes we might be in for a massive unexpected crash.
    For this (and other) reasons I did some digging into the available knowledge about the IMO most advanced practical programming techniques that are currently known: functional programming.
    As it turns out there are quite a few connections between functional programming and atomically precise manufacturing (reversible logic is just one of them). In an attempt to gain a clearer picture of the relationships I created a buzzword graph
    (bottom of post).

    I'd recommend checking out the currently reviving language Haskell
    and/or the new baby language elm (the first first order functional reactive language there is)
    My first contact with functional programming was via a course which required the reading of the article
    "why functional programming matters" - for me it was quite an interesting and eye opening read.

    Here it is (CC BY - Lukas M. Süss):
    (The source file editable with the free java program "yEd")

    Please share your thoughts.

    Attached files functional-programming-for-materialisation.graphml (215.8 KB)
    Quote from JimL (JIMLOGAJAN)

    >> So if you see major technical advantages (excluding experimental features) in UBB.Threads maybe this is the better solution.

    I haven't come to any decision yet about what will replace Beehive. UBB.Threads is only better than Beehive in some aspects; it remains to be seen if it is optimal. I do know that when I try to use a browser on an iPad with Beehive, it can't handle quoting (in fact the quoting mechanism isn't very good in general.) I need to do further research.

    I realized that the quoting button in the editor is unresponsive sometimes (both on Firefox and Chrome) - as an (admittedly ugly) workaround it is possible to hit the quote button under the post to reply to before hitting the reply button and then awkwardly copy-paste the quote box.

    About your further research:
    Thanks for your effort, keep us informed and if you run into any troubles just let us know :)

    Quote from JimL (JIMLOGAJAN)

    >> * problems in the closed source parts -> tell them and pray

    Problems in open source -> tell them and pray or solve the problem yourself.

    The "solve it yourself" option (you in plural - ok its part of "them") is IMO one more option that can become really powerful once the molecular nanotechology community grows beyond a critical level (I sure hope it will). A strong self-preservation interest might be a powerful motivation.

    Quote from JimL (JIMLOGAJAN)

    >> * company goes out of business -> you're forced to leave the sinking ship

    Unless the last release is broken there should be no reason to switch software. I frequent two piloting forums that use vBulletin that stopped upgrading to the latest release of vBulletin years ago - the company may as well have gone out of business. They seem to have survived OK.

    I think it still may be decades till APM takes off, not just a few years. One of the things I am most afraid of is super smart spamming AIs that might emerge not too far in the future. In this regard I am afraid of becoming a sitting target.

    Oops, I've completely overlooked that Inyoka is German-only - since German is my mother tongue I didn't notice => not an option.

    >> "... Commercial or proprietary aren't an issue so long as support is available and competent without costing great amounts of money."
     
    I personally prefer open source software mainly for these reasons:
    * problems in the closed source parts -> tell them and pray
    * company goes out of business -> you're forced to leave the sinking ship
    * others

    Based on the open/proprietary criterion I'd prefer PhpBB over UBB.Threads.

    But you obviously did quite a bit of research already judging from your comment:
    >> "I've never liked the vBulletin software and the company appears to have had some past internal problems. ..."

    So if you see major technical advantages (excluding experimental features) in UBB.Threads maybe this is the better solution.

    Please don't decide too hastily on a new one.

    I think the most important things are
    A.) lots of usage which forces the (big) user community to keep it working and documented
    B.) the possibility of easy backup and migration

    I went for MediaWiki for my personal wiki since Wikipedia uses it too (MediaWiki doesn't shine in point B though).
    There is nothing comparable to MediaWiki/Wikipedia in the world of forum software though.
    The two currently (2015) most commonly used forum softwares are, according to a quick investigation I just did, phpBB and vBulletin.

    1.) Among the forums I regularly visit, the software I most often encountered was phpBB
    https://de.wikipedia.org/wiki/PhpBB  -- open source with friendly homepage

    2.) vBulletin -- this one is proprietary though :S
    https://de.wikipedia.org/wiki/VBulletin

    Might Inyoka be an option?? -- I'm always amazed by the quality of documentation for Ubuntu.
    https://ubuntuusers.de/inyoka/
    It seems to be a multi-purpose CMS (wiki + forum + blog?)
    I just found this and haven't looked into it in any detail.

    I guess you know this page already:
    https://en.wikipedia.org/wiki/…f_Internet_forum_software

    Quote from JimL (JIMLOGAJAN)

    Will try to respond to your posts next week, but I do have a quick answer regarding the fate of the Nanorex software. Fortunately the software was released as open source and can be found here on github:

    https://github.com/kanzure/nanoengineer

    Thanks,
    Well, I did know of some sources for the software (as you can see here, down in my instructions section: http://www.thingiverse.com/thing:13786).
    I was mainly referring to all the valuable documentation in the Nanorex development wiki and the original centralized visual publication of all the crystolecules together in one place, with detailed descriptions and background info alongside. It kind of hurts that all this is gone.