Basics: materializable programs

  • "programmable matter" has become a buzzword by now (2015).
    It seems to refer
    * more to active than passive stuff
    * to macro- & micro-scale stuff
    * and usually not to atomically precise stuff

    With nanofactories and especially with fast recomposers for pre-mechanosynthesized microcomponents
    things will become more like: "materializable programs"
    (that includes passive nanoscale atomically precise materials)

    I think one can't overstate the importance that software will have in a world where programs literally are tangible reality.
    If we build our software palace on rotting wooden (and well hidden) code stakes we might be in for a massive unexpected crash.
    For this (and other) reasons I did some digging into the available knowledge about what is IMO the most advanced practical programming technique currently known: functional programming.
    As it turns out there are quite a few connections between functional programming and atomically precise manufacturing (reversible logic is just one of them). In an attempt to gain a clearer picture of the relationships I created a buzzword graph
    (bottom of post).

    I'd recommend checking out the currently reviving language Haskell
    and/or the new baby language elm (the first first-order functional reactive language there is).
    My first contact with functional programming was via a course that required reading the article
    "Why Functional Programming Matters" - for me it was quite an interesting and eye-opening read.

    Here it is (CC BY - Lukas M. Süss):
    (The source file is editable with the free Java program "yEd".)

    Please share your thoughts.

    Attached files functional-programming-for-materialisation.graphml (215.8 KB)
  • While I think functional programming (declarative) should be considered for implementing higher-level nanomachine operations (such as long-term planning), my understanding is that it is difficult to get predictable performance from existing implementations. For operations requiring real-time responses, such as nano-systems operating in situ, imperative programming may still be the only realistic choice. Also, the underlying hardware operates imperatively (it has state that changes with time), so there is a mismatch between the declarative notation and what is actually occurring on the machine.

    I first used a declarative language about 25 years ago. I had been writing code in imperative languages for the previous 18 years (yes, I'm an old coder), so while I understood the concept, it was not easy to figure out how to express the same program declaratively. That project lasted about a year, after which I did not encounter any use of declarative languages.

  • >>... Also, the underlying hardware operates imperatively (it has state that changes with time) so there is a mismatch between the declarative notation and what is actually occurring on the machine. ... <<

    Strong objection!!

    In advanced sensible target nanosystems (excluding early slow diffusion-based nanosystems, e.g. DNA)
    the lowest parts of the underlying hardware need to be nearly reversible to prevent excessive heating.
    And the needed *reversible low level logic has NO inherent state that changes with time!*

    To elaborate on that:
    All the internal apparent state (I'll call it pseudo-state) is completely predetermined by the starting state (I'll call this genuine-state), which is located at a higher level. This is the case because the bijective transformation steps (which define reversibility) allow no branching in the "forward" or "backward" direction of execution. The internal pseudo-state can appear big (in memory usage) relative to a small external genuine-state because the pseudo-state is just decompressed genuine-state; decompression introduces no additional information (state). Since stretches of low-level reversible computation are, as shown, stateless, they are pure functions and *inherently functional*!
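
    To make the "reversible steps are pure functions" point concrete, here is a minimal Haskell sketch (my own illustration, not part of the original argument): the Toffoli (CCNOT) gate written as an ordinary pure function that happens to be its own inverse.

    ```haskell
    -- A minimal sketch (my own illustration): a reversible logic step as a pure
    -- function. The Toffoli (CCNOT) gate is bijective and its own inverse, so
    -- "running it backward" is just running it again - no hidden state anywhere.
    toffoli :: (Bool, Bool, Bool) -> (Bool, Bool, Bool)
    toffoli (a, b, c) = (a, b, c /= (a && b))  -- flip c iff both controls are set

    -- Reversibility check: applying the gate twice restores any input.
    main :: IO ()
    main = print (and [ toffoli (toffoli x) == x
                      | a <- [False, True], b <- [False, True], c <- [False, True]
                      , let x = (a, b, c) ])
    -- prints True
    ```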

    About the length distribution of reversible stretches (granularity and upreach):
    To save a maximum amount of energy one needs to cover the lowest HW level with many long stretches of reversible computation. Accomplishing that shouldn't be a big problem at the lowest cores of a nanofactory, where you have the rather simple problem of churning out a very great number of identical standard parts via simple open loop control. Further up in the physical assembly hierarchy it might become more interesting, with richer part-composing situations and more complex nano- to micro-logistics - more on that later. It is possible to composably program long and big lowest-level reversible computation stretches (obviously they are not monolithic). It will be done, and it necessarily is purely functional - otherwise reversibility would be destroyed. There is some research on reversible assembly languages - I currently can't guess whether those will or won't be programmed "by hand".
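
    Just to sketch what I mean by composable reversible building blocks (my own toy formulation, not one of the existing reversible assembly languages):

    ```haskell
    -- A minimal sketch (my own assumption of how it could look): bijective steps
    -- bundled with their inverses, so composing them into longer stretches
    -- preserves reversibility by construction.
    data Rev a b = Rev { fwd :: a -> b, bwd :: b -> a }

    -- Sequential composition of two reversible stretches.
    (>>>.) :: Rev a b -> Rev b c -> Rev a c
    Rev f f' >>>. Rev g g' = Rev (g . f) (f' . g')

    notGate :: Rev Bool Bool
    notGate = Rev not not

    swapGate :: Rev (a, b) (b, a)
    swapGate = Rev (\(x, y) -> (y, x)) (\(y, x) -> (x, y))

    -- A longer reversible stretch, still invertible: bwd stretch . fwd stretch == id
    stretch :: Rev (Bool, Bool) (Bool, Bool)
    stretch = swapGate >>>. swapGate
    ```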

    ---- An alternate approach:

    I have another way to show why I think it's unsurprising that low-level hardware is most often perceived as inherently stateful although this is wrong. For this I'll need to briefly describe a perhaps barely known concept that is IMO very important:

    The concept of *Retractile Cascades* (as I understand them):
    Legend: X, Y, Z
    ... each stands for the same number of arbitrary bits (words, bytes, whatever - but equal in size)
    ... the X's don't have mutually equal content, and neither do the Y's and Z's

    When computing reversibly (e.g. with rod logic):
    1a) (X+YY+YYYY+...) Starting from the input X, first grow the amount of memory space added per computing step as far as necessary. (In rod logic every step corresponds to pulling all rods of an evaluation stage.)
    1b) (X+YY+YYYY+YYYYYYYY+YYYY+YY+Z)=M Then shrink the per-step memory additions back down until only the small desired result Z is added in the last step.
    1ab) Overall there is monotonic growth in used memory space - first fast, then slow.
    2) (Z':=Z) Make an imperative, target-destructive copy of the output Z. This causes some waste heat, but not too much. Caution: Z's information content (entropy) and its memory space usage are distinct things.
    3ab) M-Z-YY-YYYY-YYYYYYYY-YYYY-YY=X Finally the cascade can be reverse executed ("retract the cascade") to free the used (garbage-filled) memory space for the next computation in a (nearly) dissipation-free manner. The cascade's input X is then ready for an imperative destructive update that starts the next cycle.
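
    In software terms this whole cycle is the classic compute / copy-out / uncompute pattern. A minimal Haskell sketch (my own abstraction, nothing like actual rod logic):

    ```haskell
    -- A minimal sketch (my own abstraction): one cycle of a retractile cascade
    -- as compute / copy / uncompute. The only irreversible step is copying out
    -- the small result Z; growing and retracting the cascade are the (nearly)
    -- dissipation-free reversible parts.
    retractileCycle
      :: (x -> [w])   -- 1a/1b) forward: input X plus all intermediate stages (reversible)
      -> ([w] -> z)   -- 2)     read off the small result Z from the final stage
      -> ([w] -> x)   -- 3ab)   backward: retract the stages, recovering X (reversible)
      -> x
      -> (z, x)
    retractileCycle grow readOut shrink x =
      let stages = grow x          -- grow the cascade, filling scratch memory
          z      = readOut stages  -- destructive copy of the output (waste heat here)
          x'     = shrink stages   -- retract the cascade, freeing the scratch memory
      in  (z, x')
    ```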

    So basically a retractile cascade is a stretch of reversible computation optimized to save energy.

    Now - to show why this can seem imperative, i.e. why pseudo-state may seem like genuine-state - such a retractile cascade can be visualized as a directed acyclic graph that depicts the mutual dependencies of the memory cells. It starts at a root input node, branches out, and then merges back to a single output node. If one crops out a patch from the centre of this graph and asks how a particular value/bit emerges at a particular node inside this patch, while only having the cropped-out piece of the graph available for reconstruction, one needs a lot of genuine-state on the edges of the cropped-out patch - namely at all the places where the incoming (or outgoing) edges cross the border of the patch. If the observed context (the cropped patch) is too small, stuff that is actually functional appears to be imperative. The other way around: if you have sufficient knowledge to move your horizon of perception farther outward, more of the true functional nature of seemingly imperative stuff becomes visible.

    I think this often unavoidable limited-context tunnel view, combined with the fact that energy-saving reversible logic is still a thing of the future, is one of the main reasons why low-level hardware is likely to be mistaken as inherently stateful.

    (analysis->design) For the actual design of reversible computation (instead of the analysis done here) one *needs* a sufficient horizon to become functional and thus reversible and efficient. Curiously and luckily it's possible to build up this big horizon from small functional building blocks.

    The abstract gist I see here is that *statefulness is a relative property*,
    since the border between genuine-state and pseudo-state is movable by changing the context.
    genuine-state == information of unknown cause to the observer
    pseudo-state == decompressed information of known cause to the observer
    The cause is the compressed input information.

    ---- Leaving the reversible realm:

    Genuine-state, destructive updates and random number generators (RNGs) are undoubtedly necessary at some point.
    So did I just shift the mismatch problem upwards and draw a picture of a "functional-imperative-functional burger"? I'm not so sure about that.

    The lowest-level occurrence of those troublemakers is at the places where the stretches of reversible computation connect. As mentioned before, some genuine-state information is located at those connection places. This information contains some decisions made for practicality ("config-axiom-variables"). But state constants are actually functional (pure functions are constants too); the real issue is irreversible operations. When going beyond stretches of reversible computation like retractile cascades, what does it mean to include irreversible non-bijective operations? From an analytic perspective: what if the observed context grows big enough to enclose irreversible joins (deletions and destructive updates) or forks (spawned randomness)?

    Joins - while not reversible any more - can remain functional (same outputs on same inputs). This is often seen in functional libraries that seal imperative code. By carefully packaging deletions and destructive updates up into a functional interface one can restore the maintainability benefits of functional code and carry them upward to a higher code level (nestable & scalable). Something similar is often seen in Haskell libraries, though there it is usually rock-bottom lowest-level imperative code that gets isolated - code that would be unsuitable here because of its way too high density of irreversible updates. Today, using and hiding destructive updates seems reasonable in all situations.
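
    A minimal Haskell sketch of that kind of sealing (my own example, not taken from any particular library): the ST monad lets destructive updates happen inside while the exported function stays pure.

    ```haskell
    -- A minimal sketch (my own example): destructive updates sealed behind a
    -- functional interface. Inside runST a local mutable reference is updated
    -- in place; from the outside `sumSquares` is an ordinary pure function.
    import Control.Monad (forM_)
    import Control.Monad.ST
    import Data.STRef

    sumSquares :: [Int] -> Int
    sumSquares xs = runST $ do
      acc <- newSTRef 0                            -- local mutable state
      forM_ xs $ \x -> modifySTRef' acc (+ x * x)  -- destructive updates, invisible outside
      readSTRef acc                                -- the pure result escapes, the mutation doesn't
    ```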

    Forks are actually difficult to create. For deterministic PRNGs a context can always be found which shows that there actually are no forks. TRNGs (quantum randomness and physical noise) seem to be truly unisolatable forks. For all practical purposes they seem to introduce absolute genuine-state, and thus they may be the one single exception to the relativity of statefulness.

    Since longer reversible stretches are desirable, the connection points of stretches of reversible computation do not lie at rock bottom but at a higher level. On an even higher level - namely the level of multiple joined stretches of purely reversible computation - it is yet rather unclear what ratio of reversible to irreversible steps is to be expected (pervasiveness of irreversibility). In an advanced nanofactory the reversible hardware base will reach up to the "height" where the efficiency argument loses its grip. If that is high enough, the software maintainability argument might kick in before the efficiency argument runs out. Then there'll be no space for an imperative layer in the aforementioned "programming style burger" any more.

    >> ... so while I understood the concept, it was not easy to figure out how to express the same program declaratively. ...<<

    I had that exact same experience.
    Namely with a maze generation algorithm and a small Bomberman game.
    Both seemingly inherently stateful.

    I think the answer to that long standing problem is:
    A.) usage of modern functional data-structures and
    B.) usage of modern functional programming capable of handling interleaved IO

    ad A.) There are already libraries for data structures with O(1) random access and cheap non-destructive in-place updates implemented by diffing. Haskell's (awfully named) vector library is an example.
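
    A minimal sketch of what such a non-destructive update looks like in practice (using Data.Vector from the vector package; just an illustration, not a performance claim):

    ```haskell
    -- A minimal sketch (my own illustration): persistent updates with Data.Vector.
    -- The "updated" vector is a new value; the original stays intact.
    import qualified Data.Vector as V

    main :: IO ()
    main = do
      let v  = V.fromList [0 .. 9 :: Int]  -- original vector
          v' = v V.// [(3, 42)]            -- non-destructive update at index 3
      print (v  V.! 3)                     -- 3  (original unchanged)
      print (v' V.! 3)                     -- 42 (updated copy)
    ```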

    There's the common critique of slowness due to fine-grained boxed data structures.
    Today this is solved by workarounds (the aforementioned sealed imperativity in libraries for functional languages).
    But I'd guess that at the microprocessor level of advanced nanofactories (not rock bottom) there'll be some architecture optimized for functional language execution that circumvents the so-called "Von Neumann bottleneck". Today there exists a cool demo running on FPGA hardware called the "Reduceron".
    https://www.doc.ic.ac.uk/~wl/i…ts/papers/reduceron08.pdf
    It reports game-changing performance.
    https://github.com/tommythorn/Reduceron

    ad B.) First-order functional reactive programming (FRP 1.ord).
    This I just recently encountered with the "elm" language - it blew me away.
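
    To give a flavour of what first-order FRP boils down to, here is a tiny Haskell sketch loosely modelled on elm's foldp (my own toy model, not elm's actual implementation):

    ```haskell
    -- A minimal sketch (my own toy model, loosely after elm's foldp): a signal as
    -- the history of its values, and interactive state as a pure fold over the
    -- incoming events - no mutation anywhere.
    newtype Signal a = Signal [a]

    foldp :: (a -> s -> s) -> s -> Signal a -> Signal s
    foldp step s0 (Signal events) = Signal (scanl (flip step) s0 events)

    -- Example: a click counter.
    data Event = Click | Tick

    counter :: Signal Event -> Signal Int
    counter = foldp (\e n -> case e of Click -> n + 1; Tick -> n) 0
    ```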


    ------------------------------- Performance:

    Regarding lower level:
    >>... For operations requiring real-time responses, such as nano-systems operating in-situ, imperative programming may still be the only realistic choice. ...<<

    Most low-level stuff in nanofactories will probably be dead simple open-loop control.
    The strong determinism of functional languages is a good basis for reliable systems.
    Nonetheless I guess I need to read up on this a bit.

    Regarding higher level:
    >> ... my understanding is that it is difficult to get predictable performance from existing implementations. ... <<

    This is often mentioned in light of lazy evaluation.
    Lazy evaluation is not inherent to functional programming but made possible by it.
    (non-strict -> choose the best from both worlds; see the sketch below)
    I personally do not have practical experience with laziness in big projects.
    I haven't run into very many complaints about it.
    Here's some low-noise commentary on this:
    https://www.quora.com/Is-lazy-…in-Haskell-a-deal-breaker
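
    A minimal sketch of that "best of both worlds" point (my own example): laziness for infinite definitions, explicit strictness where accumulation would otherwise leak space.

    ```haskell
    -- A minimal sketch (my own example): non-strictness and strictness side by side.
    import Data.List (foldl')

    fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)   -- infinite, lazily defined

    main :: IO ()
    main = do
      print (take 10 fibs)                        -- laziness: compute only what is demanded
      print (foldl' (+) 0 [1 .. 1000000 :: Int])  -- strict fold: no thunk build-up
    ```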

    I feel like there has been a lot of improvement over the last years in existing implementations (languages + libraries). By now there are (at least for Haskell) many pre-built libraries for (both lazy and strict) purely functional data structures with known Landau complexities for time and space (amortized or worst case). Those data structures contain all the clever and/or dirty work needed to avoid the usual inefficiencies and space leaks of naive implementations.
    There are quite a few efficiency demos for functional languages (not sure how objective the spectrum is). Especially intensive number crunching with non-trivial parallelism (multi-core, not GPU) is said to be way easier to program with functional-language-enabled "software transactional memory".
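
    A minimal Haskell STM sketch (my own example) of the kind of composable atomicity meant here:

    ```haskell
    -- A minimal sketch (my own example): software transactional memory in Haskell.
    -- Both updates inside `transfer` commit atomically or not at all, and the
    -- transaction composes with others without manual locking.
    import Control.Concurrent.STM

    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to amount = do
      modifyTVar' from (subtract amount)
      modifyTVar' to   (+ amount)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      atomically (transfer a b 30)
      readTVarIO a >>= print  -- 70
      readTVarIO b >>= print  -- 30
    ```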


    ------------------------------- Usage:
    >>... That project lasted about a year, after which I did not encounter any use of declarative languages.<<

    Many people in the 3D printing maker community (including me) are using "OpenSCAD", a declarative, purely functional programming language (with C-like syntax). In fact I do my 3D modelling work almost exclusively with it. The restriction to the description of static, non-interactive objects makes it very different from "normal programming" though. A nanofactory is a lot about 3D modelling.

    Constructive solid geometry can be made incredibly elegant in functional languages.
    I did an experiment here:
    http://www.thingiverse.com/thing:40210
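
    To illustrate what CSG in a functional language can look like, here is a minimal Haskell sketch (my own toy, in the spirit of implicit/signed-distance modelling, not the linked Thingiverse code):

    ```haskell
    -- A minimal sketch (my own toy example): solids as implicit functions,
    -- CSG operations as ordinary higher-order functions.
    type Solid = (Double, Double, Double) -> Double  -- negative = inside the solid

    sphere :: Double -> Solid
    sphere r (x, y, z) = sqrt (x*x + y*y + z*z) - r

    translate :: (Double, Double, Double) -> Solid -> Solid
    translate (dx, dy, dz) s (x, y, z) = s (x - dx, y - dy, z - dz)

    union, intersection, difference :: Solid -> Solid -> Solid
    union        a b p = min (a p) (b p)
    intersection a b p = max (a p) (b p)
    difference   a b p = max (a p) (negate (b p))

    -- Two overlapping spheres merged into one solid.
    blob :: Solid
    blob = sphere 1 `union` translate (1.5, 0, 0) (sphere 1)
    ```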
    Then there's the super powerful lazy infinite multidimensional automatic differentiation method invented by Conal Elliott - very useful for gradients, normals, curvatures and whatnot in 3D modelling (Taylor series).
    This and other bleeding-edge stuff is AFAIK integrated in the Haskell 3D modelling program "ImplicitCAD",
    written mainly by Christopher Olah.
    Sadly there are two major problems. One: it's still horribly hard to install, which brings me to the point of dependency hell. (There's the functional package manager Nix - another example of a practical application of functional programming.) And two: this one is actually too complicated to go into here.
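
    The core trick behind that kind of automatic differentiation can be sketched with plain dual numbers (my own minimal version, far simpler than Conal Elliott's infinite Taylor-tower formulation):

    ```haskell
    -- A minimal sketch (my own, much simpler than the infinite Taylor-tower
    -- version): forward-mode automatic differentiation with dual numbers.
    -- Every value carries its derivative along, so gradients/normals fall out
    -- of ordinary arithmetic code.
    data Dual = Dual { val :: Double, der :: Double }

    instance Num Dual where
      Dual a a' + Dual b b' = Dual (a + b) (a' + b')
      Dual a a' * Dual b b' = Dual (a * b) (a' * b + a * b')
      negate (Dual a a')    = Dual (negate a) (negate a')
      abs    (Dual a a')    = Dual (abs a) (a' * signum a)
      signum (Dual a _)     = Dual (signum a) 0
      fromInteger n         = Dual (fromInteger n) 0

    -- Derivative of f at x.
    diffAt :: (Dual -> Dual) -> Double -> Double
    diffAt f x = der (f (Dual x 1))

    main :: IO ()
    main = print (diffAt (\x -> x * x + 3 * x + 1) 2)  -- 2*2 + 3 = 7.0
    ```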

    On another side there is functional reactive programming:
    Conal Elliott again:
    https://www.youtube.com/watch?v=faJ8N0giqzw
    With first-order functional reactive programming (elm - designed by Evan Czaplicki) interactive programming seems to become a breeze - actually it promises to be easier than in imperative languages.
    That should open up the usage space quite a bit.



    ------------------------------- Collected side-notes:
    * Reversible actuators:
    Bottommost reversible hardware includes not only reversible logic but also reversible low level mill style actuators for the mechanosynthesis of standard-parts.
    * Motivating:
    Even in the reversible retractile cascade stretches some irreversibility needs to be added in the clock to give the nanofactory minimal but sufficient motivation to move in the right direction - changing pseudo-future into genuine-future.
    * Pipe-lining:
    Unfortunately retractile cascades seem to block pipelining quite a bit (like Konrad Zuse's mechanical four-phase pipelining in the Z1 purely mechanical Von Neumann computer). It probably comes down to a trade-off between dissipation and speed.
    * Unwanted (?) Freedom:
    Looking again at a point in the dependency graph one can create a partitioning very akin to the light cone - the "dependency cone". One can find an area with nodes that are neither in the pseudo-past nor in the pseudo-future of the analysed node. In an actual implementation all those non-interacting nodes must be shifted to the relative past, present or future. Thus there is some freedom of asynchronicity. Additional state is needed to fix these free, undefined parts of a stretch of reversible computation. The obvious choice to fix this is to use a synchronizing clock. After that one can step through all the pseudo-state slices of the reversible-computation-stretch pure function with a one-dimensional slider. Inside a retractile cascade (with the clock included!) there are no rotating wheels, reciprocating rods or other parts that move freely (that is, thermally agitated). Everything is connected. Thus the whole intermeshed nano-mechanic system has only one single degree of freedom. The whole process is fully deterministic.
    * Reversible computing:
    bijective mapping ->
    no spread in state space, neither in the pseudo-future nor in the pseudo-past direction ->
    constant entropy -> no arrow of time -> no real future and no real past
    In contrast to imperative stuff which introduces the situation where you "split reality"
    Y-join: bit deletion (overwriting) - (possibly?) multiple pasts - system entropy decreases
    Y-fork: random bit - multiple futures - system entropy increases
    * (Evaluation stages in retractile cascades do not contain equal information/entropy, but the snapshots of the whole cascade between stage evaluation steps do. -- entropy(output)/entropy(input) = ??)
