Posts by lsuess

    Thanks for the quick reply.

    Still unlisted on purpose. Thus I wrote "I'll publicly list it soon." above.
    Just shared the video with a select few for now in order to spot potential fatal flaws before going fully public.
    Guess you didn't spot any.
    Will list it later today or tomorrow.

    Any other comments welcome.

Finally my first long-format video is here (a follow-up to the teaser in the preceding post):


    I'll publicly list it soon.

At almost 30 min in length (~25 min without the music at the end) it's a bit on the long side, but that's how it turned out.
I covered what I think is the most important yet virtually unknown scaling law.
Thus I hope it has educational value.

There are several minor issues with this first video that I am well aware of.
    But I have to draw a line.

    Many possible avenues to go from here.
    I have a list of ideas, but I also want to take into account some comments that I'll hopefully get.

    I've put a link to the sci-nanotech forum prominently into the description.
    I hope this may eventually help in bringing back some life to the forum here.

    Jim Logajan
    JACK DECKER
    Anyone?


    I wanted to have the results of the paper:
    – [FiMM] Evaluating the Friction of Rotary Joints in Molecular Machines
    and the numbers in:
    – [N] Nanosystems: Molecular Machinery, Manufacturing, and Computation
    in a comparable form (motivation further down).
    So I converted what I found to common units and combined them together into a single plot.
    I got some surprising results:


    Simulation results in [FiMM] exceeding conservative upper bounds in [N]?


Estimates in Nanosystems [N] are supposed to be highly reliable upper bounds,
so somewhat accurate simulations delivering values above (or only slightly below)
the Nanosystems values are a sign of a potential error.


The two mutually quite consistent nanotube simulations from [FiMM] are
both quite far (~10x) above the Nanosystems upper bound for the flat-surface case of equal area.
It could be that flat-surface friction is higher than for very-small-diameter bearings.
But that much and more? I doubt it.
(Maybe I need to check the limit for large-diameter bearings.)


The upper bound for the diamond bearing case is (as it should be) above the simulations.
But only by a margin of very crudely 3x, which is quite small for Nanosystems.


    Have I made some error?
    Or is there a genuine issue?
    A review would be appreciated.


Regarding the lowest-friction dashed red line:
    Nanosystems mentions a special trick that could be used to lower friction by a further whopping 1000x.
    (dropping band-stiffness-scattering drag below irreducible remaining shear-reflection-drag – via proper bearing design)
    This one is way below the friction levels in [FiMM] but that's ok since the simulated nanotube bearing does not apply this trick.
    I'm not entirely sure whether I comprehend that trick correctly.
    It is not visually illustrated in Nanosystems.
    More on that later.


Surprisingly high levels of friction for the name "superlubricity"


As just one random example picked from the chart, there is about 100 W/m^2 at 1 m/s.
Compared to what is possible with macroscale ball bearings that's not small. I wasn't really expecting that.
I guess "superlubricity" really refers more to the complete absence of static friction
than to small dynamic friction.


The Wikipedia page seems unaware of that:
https://en.wikipedia.org/wiki/Superlubricity
What's up here?!


Luckily the scaling law "smaller machinery features higher throughput per volume" saves the day,
insofar as it allows plenty of the obvious solution:
going down to lower speeds, since "halving speed quarters friction"
(and distributing speed differences over several stacked interfaces; see the sketch below).
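A minimal sketch of both points (assuming a purely quadratic speed dependence P/A ~ v^2 and using the ~100 W/m^2 at 1 m/s example value from my chart; illustrative numbers, not from [FiMM] or [N]):

# "Halving speed quarters friction", plus splitting a speed difference
# over n stacked sliding interfaces. Illustrative Python sketch.

def friction_power_per_area(v, k=100.0):
    """Dynamic friction loss per area, assuming P/A = k * v^2
    with k = 100 W/m^2 at 1 m/s (example value from the chart)."""
    return k * v**2

v = 1.0  # m/s
print(friction_power_per_area(v))      # 100 W/m^2
print(friction_power_per_area(v / 2))  # 25 W/m^2 -- quartered

# Splitting the same total speed difference over n stacked interfaces:
# each slides at v/n and there are n of them, so
# P_total/A = n * k * (v/n)^2 = k * v^2 / n -- an n-fold reduction.
n = 4
print(n * friction_power_per_area(v / n))  # 25 W/m^2 for n = 4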


    The up to 1000x lower friction trick:


Nanosystems page 174 bottom (7.3.5.c.):
"As discussed in chapter 10, Δk_a/k_a can be made small in bearings of certain classes, …"
Nanosystems page 292 (10.4.6.c.):
"… For first-row atoms (taking carbon as a model), Δk_a/k_a ≈ 0.3 to 0.4 (at a stiffness-per-atom of 1 and 10 N/m respectively)
where d_a = 0.25 nm, and ~0.001 to 0.003 where d_a = 0.125 nm. This value of d_a cannot be physically achieved in coplanar rings, but it correctly models a ring sandwiched between two other equidistant rings having d_a = 0.25 nm and a rotational offset of 0.125 nm."


    Given Figure 10.9. (page 285 before) I think "coplanar rings" here actually refers to "cocylindrical rings".


– d_a is the (possibly virtual) interatomic spacing along the bearing's circumference
– k_a is the stiffness-per-unit-area of the bearing interface
– Δk_a is the peak-to-peak variation of this stiffness due to "alignment bands" (spatial corrugation interference periods)


I totally don't understand how merely halving the
(virtual) interatomic spacing can make the friction drop by almost 1000x.
That is: how halving d_a makes Δk_a/k_a go down by a factor of ~1000.
Anyone have any idea?


Also I wonder if this would work with the offset applied to atoms situated
further along axially on the same surface, if they are (sideways) coupled stiffly enough (like alternating rows).
That option would remove the hassle of needing to design radially wedged bearings.
Does anyone know?


    Motivation:

Showing how the scaling law for throughput is the main factor driving friction losses down.
And creating awareness of this scaling law's importance, which can hardly be overstated.


I eventually want halfway reasonable friction-loss numbers for
an animation visualizing the (IMO much too little known) scaling law of
"higher throughput density of smaller machinery".
Preliminary state here:
    YouTube+APM again


There was an article on Eric's former website featuring this scaling law, but
I can't even find it on the Internet Archive anymore.
    So I started making this animated 3D-model.


    Further plans for making charts that are relevant for nanoengineering:

I'd like to eventually get to a chart that takes into account desirable deliberate deviations from keeping absolute speeds constant across scales.
A chart where halving speed comes with a doubling of area (and proportional productive machinery volume) such that overall throughput stays the same.
This leads to:
– many parallel lines, each for a specific total throughput
– linear rather than quadratic lines (half the slope in a double-log chart)
– absolute power loss on the y-axis ~> I found that converting back to power-loss-per-area is seriously confusing
There are a lot of details so I'll leave that for an eventual later post. A minimal sketch of the core relation follows below.
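Sketch of that chart's core relation (my illustrative constants, assuming dynamic friction P ~ area * v^2): trading speed against quantity of machinery at constant throughput gives P ~ v, i.e. a line of slope 1 instead of 2 in a double-log chart.

# Halve speed, double area (and volume), keep throughput constant:
# total friction power then falls linearly with speed.
k = 100.0   # W/m^2 at 1 m/s (example value from the chart)
A0 = 1.0    # m^2 of sliding interface at the reference speed

for halvings in range(4):
    v = 1.0 / 2**halvings   # sliding speed, m/s
    A = A0 * 2**halvings    # area doubles as speed halves
    P = k * v**2 * A        # total friction power, W
    print(f"v = {v:.3f} m/s   A = {A:4.1f} m^2   P = {P:6.2f} W")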


    Transport dissipation losses in soft nanosystems:

Viscous drag being so much higher is to be expected.
    [VD] https://web.archive.org/web/20…om/p/04/03/0322drags.html


But this does not show that soft nanosystems are inferior in terms of efficiency, as I had initially wrongly assumed.
What one would want to compare here is diffusion transport.
But the speed of diffusion transport being distance dependent makes it difficult to compare.
For a desired diffusion speed this would, I presume, need assumptions on
– spatial pit-stop-membrane crossing frequency
– energy dissipation per pit-stop
– size of transported particles
Maybe something to look into …
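To make the distance dependence concrete (my rough numbers: an assumed diffusion coefficient of D ~ 1e-10 m^2/s for a smallish particle in water, order of magnitude only):

# Mean first-passage time over distance x scales as t ~ x^2 / (2D),
# so the effective transport speed v_eff = x/t = 2D/x falls with distance.
D = 1e-10  # m^2/s, assumed diffusion coefficient

for x in (1e-9, 1e-6, 1e-3):  # 1 nm, 1 um, 1 mm
    t = x**2 / (2 * D)
    print(f"x = {x:.0e} m   t = {t:.1e} s   v_eff = {x / t:.1e} m/s")

Diffusion is fast over nanometers (~0.2 m/s effective here) but crawls over millimeters (~2e-7 m/s), which is why any comparison needs those pit-stop assumptions.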


    Attached:
    My preliminary version of the compiled friction results chart.
    I may eventually publish the gnuplot source to github.

The new video has been out since mid-February 2022:


Well, it's just a sneak preview for something
– with smooth animation from pre-rendering
– proper example numbers and
– spoken explanation


No clue how long it will take till I have something;
at the current pace this goes nowhere …


    Copied over from the video description:


Here's a sneak preview for a video that I'm planning.

*THIS IS NOT A PROPOSED SYSTEM* It only serves to illustrate an important scaling law that IMO deserves more public awareness.

For future cog-and-gear style diamondoid nanotechnology the high surface area of all the many bearings at the nanoscale (causing friction losses) is not a show-stopping problem, as is sometimes assumed. One main reason for that is: A very small volume of productive nanomachinery already suffices for practical levels of throughput.

So why is only very little volume of nanomachinery sufficient? Every layer shown in the animation processes the exact same amount of product per time. But the layers get successively thinner. So the lowermost ultra-thin layer with nanomachinery at the very bottom processes the exact same amount of product per time as the macroscale robot in the big fat box at the very top. All layers but the nanomachinery layer at the very bottom can be made optional and stripped away. But even if they are left in they do not contribute all that much to friction. This goes into a rabbit-hole of details ... which I'll omit.

But the smaller and smaller robots here operate faster and faster, the reader might argue. Well, no. They don't. Note this deceptive fact: While frequencies increase when going down the layers, the absolute speeds actually stay unchanged/constant. Everything operates at the natural frequency for its scale in this illustration here (which is not a proposed system btw).

So in summary the teased *SCALING LAW* reads as follows: **Smaller machinery features higher throughput per volume (higher throughput density). A linear scaling law. Half size means double throughput density.**

*MATH:* http://apm.bplaced.net/w/index.php?ti...
Reviews appreciated.

*BONUS:* And if you "buy" lower speed of nanomachinery (absolute speeds, not frequencies) by "paying" with a higher quantity (volume) of nanomachinery (keeping throughput constant) then friction losses still fall further. That is because dynamic friction losses scale down quadratically with sliding speed while they scale up only linearly with the quantity (volume) of nanomachinery. Taking this into account one gets closer to an actually proposable system.

Keywords:
* APM atomically precise manufacturing
* diamondoid nanomachinery

Music by SuperLinuxAudioGuru - License CC0
https://youtu.be/lxwcNZWLDwQ

THE SHOWN ARCHITECTURE IS NOT A PROPOSED SYSTEM
It only serves to illustrate an important scaling law that deserves more public awareness.

— — —
★ Public APM forum: https://sci-nanotech.com/
★ My Twitter: https://mobile.twitter.com/mechadense
★ My homepage: https://mechadense.github.io/00.Home-...
★ My wiki: http://www.apm.bplaced.net
★ Support me: https://www.patreon.com/mechadense
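The layer math from the description, as a quick sketch (illustrative numbers; each layer is assumed half as thick as the one above while passing identical throughput):

# Throughput per volume doubles with every halving of machinery size.
throughput = 1.0  # product per second, identical for every layer
footprint = 1.0   # m^2, shared by all layers

for layer in range(5):
    feature_size = 1.0 / 2**layer      # m, halves from layer to layer
    volume = footprint * feature_size  # layer thickness ~ feature size
    print(f"layer {layer}: size {feature_size:7.4f} m   "
          f"throughput density {throughput / volume:5.1f} /(s*m^3)")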

    Regarding the topic of material stiffness:
    I've since cleared up a long held misunderstanding of mine.


While stiffness indeed shrinks with falling size (to levels that make the softest jelly envious; I'm too lazy to dig up example numbers right now), inertial masses shrink in such a way that they exactly compensate this falling stiffness when speeds are kept constant across scales (or equivalently: when operating frequencies rise linearly with falling size, which is a somewhat natural assumption).
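A back-of-envelope check of this compensation (my own sketch, not from the book; a part of size L has spring constant k ~ E*L and mass m ~ rho*L^3):

# Resonance frequency f_res ~ sqrt(k/m) ~ sqrt(E/rho)/L rises exactly as
# fast as the operating frequency f_op ~ v/L does at constant speed v,
# so their ratio -- and hence ringing behavior -- is scale invariant.
from math import sqrt, pi

E, rho, v = 1e12, 3500.0, 1.0  # diamond-ish modulus (Pa), density (kg/m^3), speed (m/s)

for L in (1e-1, 1e-4, 1e-7):   # 10 cm, 100 um, 100 nm
    f_res = sqrt(E / rho) / (2 * pi * L)  # ~ lowest mechanical resonance
    f_op = v / (2 * pi * L)               # operating frequency at constant speed
    print(f"L = {L:.0e} m   f_res = {f_res:.2e} Hz   f_res/f_op = {f_res / f_op:.0f}")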


    So problems with mechanical ringing stay unchanged from macroscale to nanoscale. But …


… But one actually wants to slow down, since one both can-afford-to and needs-to do so.
– "afford-to" relates to the enormously beneficial scaling law for throughput density
– "need-to" relates to the surprisingly high friction-losses-per-area of superlubrication even at moderate speeds
More on that elsewhere eventually … (more cleared-up misunderstandings of mine there)
Anyways, presumably for these reasons (not explained here, and not explicitly stated in the book Nanosystems) the typical/majority operating speeds of nanomachinery in Nanosystems are intentionally proposed quite low, at around 4 to 5 mm/s.


So the macroscopic analogy would be a hypothetical …
– machinery that one can afford to operate 1000x slower while retaining the same product throughput
– material with the stiffness of diamond but double-digit percent elasticity before breaking
So much for all those saying/preaching that "things change for the worse" when using cog&gear style nanomachinery at the nanoscale. Availability bias at work …


So keeping macroscopic prototypes for nanoscale target systems conservative in their assumptions on stiffness, despite them being made from very low stiffness plastics, seems trivial. Not something to worry about.
This holds for all but the first assembly level. See the next paragraph.
In fact such prototype systems are so much worse in their resistance to ringing that it might be problematic the other way around.
That is: nanostructures could be made much more filigree.
Macroscale 3D-printed plastic prototypes may suffer from unavoidable over-engineering.
While they will still work at the nanoscale they will be far from ideal/optimized.


Well, low stiffness not being a worry only holds as long as there is no piezomechanosynthesis involved.
(Piezomechanosynthesis as in tool-tip preparation and the first assembly level from moieties to crystolecules.)
Beefy high-stiffness-from-geometry structures at the nanoscale are really only needed to counter deflections from thermal motion,
since these deflections are many orders of magnitude larger than the deflections from machine accelerations at the smallest scales.
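A rough order-of-magnitude check of that last claim (all numbers my own assumptions: a ~1e-21 kg crystolecule-sized part, ~10 N/m interface stiffness, ~5 mm/s speed, ~100 nm turning radius):

# RMS thermal displacement x_th ~ sqrt(kT/k) versus static deflection
# x_acc ~ m*a/k under centripetal load a = v^2/r.
from math import sqrt

kT = 4.1e-21   # J, thermal energy at room temperature
k = 10.0       # N/m, assumed stiffness
m = 1e-21      # kg, assumed part mass
v = 5e-3       # m/s, operating speed
r = 1e-7       # m, assumed turning radius

x_th = sqrt(kT / k)
x_acc = m * (v**2 / r) / k
print(f"thermal: {x_th:.1e} m   acceleration: {x_acc:.1e} m   ratio: {x_th / x_acc:.0e}")

With these assumptions the thermal deflections come out around nine orders of magnitude larger.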


At the second assembly level (assembling crystolecules into microcomponents) deflections from thermal motions
are quite probably already pretty much negligible. There is:
– a large number of parallel-acting high-stiffness bonds
– only about kT of energy for the lowest-order bending modes of the whole crystolecular structure (typically many thousands of atoms)
– compensatability by self-centering
So I wouldn't expect a need to focus on choosing maximally stiff geometries there.
Well, to be absolutely sure I still need to check actual numbers …


Side-note: There are other reasons besides stiffness to go for parallel manipulators.
E.g. a larger number of pathways for threading mechanical motion via chains or such.

    University of Oxford:
    "Postdoctoral Research Assistant in DNA Nanotechnology Applied to Molecular Additive Manufacture": link
    (Only applications received before 12.00 midday on 15 June 2018 can be considered.)


    More funding for APM by AMO: link
    ( AMO ... Advanced Manufacturing Office link )


    "Evaluating Future Nanotechnology:
    The Net Societal Impacts of Atomically Precise Manufacturing"
    Steven Umbrello, Seth D. Baum
    Global Catastrophic Risk Institute,
    http://gcrinstitute.org
    (2018-04-28): link


    Meetup group: League of Extraordinary Algorithms -- Special Topic: Molecular Manufacturing (past - was on 2018-05-12) -- link

    Paper title:
    "On the effect of local barrier height in scanning tunneling microscopy:
    Measurement methods and control implications"
    https://doi.org/10.1063/1.5003851



I'll try to summarize it in a reader-friendly way.
(I found the abstract and conclusion of this paper not really satisfying.)



One of the most serious issues with current STMs
is their tendency to fail on some of the more severe surface features, e.g.
chemically highly reactive sites including dangling bonds.



(Since I once worked with an STM (an Omicron) I know that pain all too well.
Exactly where it gets interesting one gets all those "shadows" where the feedback control fails.)



A simple formula for the tunneling current in STMs is as follows:
i = c*V * exp( -d * delta * sqrt(phi) )
where:
– i ... tunneling current
– c*V ... some constant
– d ... another constant = 10.25 /(nm*sqrt(eV))
– delta ... tunneling gap length in nm
– phi ... arithmetic average of the probing-tip and sample work functions, in eV



Taking the logarithm, this equation becomes:
ln(i) = ln(c*V) - d * delta * sqrt(phi)



In the usual operating range of delta values, phi is mostly independent of delta.
Written in differential form:
d(phi)/d(delta) ≈ 0



Thus one can differentiate the logarithmized equation with respect to delta, giving d(ln i)/d(delta) = -d * sqrt(phi), and solve for phi:
phi = ( (1/d) * d(ln i)/d(delta) )^2



So from the squared slope of the logarithm of the tunneling current one can determine the work function average. Since this value is position dependent, one can obtain a local-work-function image, better known as a local-barrier-height (LBH) image.
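As a sketch of this extraction (synthetic data; the 4 eV barrier and the current prefactor are assumed values, not from the paper):

# Recover phi from the slope of ln(i) versus delta via
# phi = ( (1/d) * d ln(i)/d delta )^2, with d = 10.25 /(nm*sqrt(eV)).
import numpy as np

d = 10.25                # 1/(nm*sqrt(eV))
phi_true = 4.0           # eV, assumed work function average
cV = 1e-6                # A, lumped prefactor

delta = np.linspace(0.5, 0.7, 50)                # gap in nm
i = cV * np.exp(-d * delta * np.sqrt(phi_true))  # tunneling current

slope = np.polyfit(delta, np.log(i), 1)[0]       # d ln(i)/d delta
print((slope / d)**2)  # ~4.0 eV -- the assumed barrier height recovered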



    Now here's the problem:



Conversely, varying the current always in the same way, i.e. independently of the LBH at the current position, produces different variation amplitudes of delta depending on the local work function. What most current (2018) STMs use is PI control with exactly that position-independent constant gain. And this is what regularly gets them into an unstable regime (a regime where the actual current widely departs from the desired current) at locations with low LBH. This leads to the aforementioned "shadows".



Here's what the paper's authors did to solve the issue:



They superimposed a "high"-frequency dithering signal (dither frequency: 4 kHz) onto the unprocessed feedback signal such that they could determine the LBH from the resulting current variations. (This part was not new.)



Then they use the obtained LBH value to continuously (LBH estimation bandwidth: 400 Hz)
re-tune the DC gain of the STM's PI controller, specifically the proportional P part. (PI feedback bandwidth: 300 Hz.)
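The core idea as I read it (a sketch of the gain-scheduling principle, not the authors' actual implementation; all numbers assumed):

# The z-loop sensitivity on ln(i) is d ln(i)/d delta = -d*sqrt(phi), so
# the loop gain is proportional to sqrt(LBH). Dividing the P gain by the
# continuously estimated sqrt(phi) keeps the effective loop gain constant.
from math import sqrt

d = 10.25          # 1/(nm*sqrt(eV))
phi_nominal = 4.0  # eV, the LBH the fixed-gain loop was tuned for
k_p = 0.05         # nm per unit of ln-current error, nominal tuning

def scheduled_gain(phi_estimate):
    """Rescale k_p so that k_p * d * sqrt(phi) stays constant."""
    return k_p * sqrt(phi_nominal / phi_estimate)

for phi in (4.0, 1.0, 0.1):  # normal site ... low-LBH site (dangling bond)
    kp_adj = scheduled_gain(phi)
    print(f"phi = {phi:4.1f} eV   k_p = {kp_adj:.3f}   loop gain = {kp_adj * d * sqrt(phi):.3f}")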



As a side-note: they used an alternative implementation of a lock-in amplifier, including second-order band-pass filters and first-order Lyapunov filters. They write that they outlined the details in one of their preceding papers.



    The results:



    (Fig. 5.):
    (1) Significant reduction of the unwanted correlation of the LBH images with the topography images.



(Fig. 6):
(2) At the usually wanted and/or necessary high gain settings near the stability limit, sudden drops in LBH (as in the case of dangling bonds) no longer lead to PI control breakdown. The old "solution" of reducing the overall gain led to more tip-sample crashes (especially in lithography mode) due to less sensitivity and smaller bandwidth.



(Note: The shadows are not crashes. They are more like over-retracts. It's when lowering the DC gain to reduce these shadows that one gets crashes.)



They write that the usual assumption of the "gap modulation method" (the established method of feedback dithering for LBH image generation) is that the delta dithering amplitude is constant because the modulating frequency is beyond the controller bandwidth.
They write that this assumption does not always hold, especially for fast-scanning high-bandwidth scanners.
And I take it (my interpretation, reading between the lines) that they mean the problem was not solved till now because it had been overlooked.


All this was done with big, slow, macroscopic piezo-based STMs.
(An in-house design STM of Zyvex and an Omicron STM for comparison.)



So we are left to wonder how much this will do for fast and lightweight MEMS-based STMs.


    PS: Here's some news coverage with video:
    https://www.nanowerk.com/nanot…ogy-news/newsid=49386.php

    Primary press release: https://www.tum.de/en/about-tu…ses/detail/article/34408/
    Paper (walled): http://science.sciencemag.org/content/359/6373/296
    Video: (long url)
    Direct link to video: https://www.youtube.com/watch?v=K9fuSVaszyg


I was very much waiting for this news.
This will go into my list of archived milestones on the incremental path.


Next up, to widen the data bottleneck, this needs to be electrically parallelized and combined with (nontrivial) nanomechanical demultiplexing.
All that while reducing the self-assembly failure rate and further extending and improving (the already demonstrated) convergent/hierarchical self-assembly capabilities.


The emergence of unconventional biomineralization research is also a milestone I'm eagerly waiting for.
(Unconventional in the sense of not trying to recreate strong (but non-AP) composite materials like mica, but trying to create less strong but more versatile pure and AP single crystals of desired shape.)


    PS: some media coverage:
    nanowerk; nextbigfuture; kurzweilai

    Name: "Mechanical Computing Systems Using Only Links and Rotary Joints"
    (Submitted on 10 Jan 2018)
    by Ralph C. Merkle, Robert A. Freitas Jr., Tad Hogg, Thomas E. Moore, Matthew S. Moses, James Ryley


    https://arxiv.org/abs/1801.03534


    (There was a preceding report: http://www.imm.org/Reports/rep046.pdf)


    Essential points:
+) extreme simplicity: only two elements, links and 2D rotary joints
+) (as the paper says): "All parts of the system can remain permanently connected and yet still provide all necessary combinatorial and sequential logic"


It is not mentioned in the paper like this, but I think the idea (Fig. 3) can be interpreted a bit more abstractly as follows:
The locks provide a singular mutual dead-center point where one gets an additional "singular DOF".
(Does "singular DOF" make sense? I mean a point where two DOFs cross and one can switch between the two systems. There might be a relationship to holonomic constraints: https://en.wikipedia.org/wiki/Holonomic_constraints ? Or not.)
This one additional "singular DOF" allows for temporarily decoupling the downstream logic from the upstream logic without actually detaching parts (which would likely cause vibrations), and thus allows for:
1) "repeated buffer power refresh in a pipeline by clocking" and
2) "(reversible) latch memory" in sequential logic (referring to low-density memory, not particularly to high-density memory like Fig. 16).


3D-printed demo models (as suggested in the paper, Fig. 22) would be cool, but even pretty simple things (beyond the depicted 3D-modeled test part) like a basic one-bit full adder (Fig. 6) are already pretty darn big in their maximally compressed form (which, I take it, is depicted in Fig. 6).
Btw: even the basic universal gates NOR and NAND (the NAND in Fig. 5 is a double-sized combo with negated logic, I think) are already composite structures in this "calculus".
Naively composing these composites (e.g. into the aforementioned full adder) without a subsequent simplification step makes the results even bigger. (It's an uncompressed form.)


Making models by cutting links from bottle plastic (HDPE / PET) with scissors and a hole puncher may be viable and cheap.


Some side-notes / observations:
+) IIRC Nanosystems mentions that dissipation from sliding rotation scales worse than dissipation from sliding translation. But I think this is still much better than rod logic.
+) Friction force, and thus dissipation too (dissipated_energy = friction_coeff * normal_force * path), is to first approximation independent of area (at least at the macroscale). So if force is kept constant (not pressure, as in the usual case!!) then a bigger superlubricating bearing should perform not much worse than a single-sigma-bond bearing. Shouldn't it? I'm oversimplifying; see the sketch after this list. I definitely need to re-read the paper "Evaluating the Friction of Rotary Joints in Molecular Machines" Ref [11] more thoroughly:
https://arxiv.org/abs/1701.08202
+) IMO the single-sigma-bond bearings partially destroy the benefit of radiation hardness which the massive links provide (mentioned in intro section 5.1).
That needs to be quantified.
+) The current design (Fig. 24) with its stark size mismatch between bearings and links feels rather prone to overtones, ringing, sideways wiggling, ... .
I could be wrong there.
    +) Maybe the two preceding points are just a matter of optimization focus:
    Minimal dissipation (design as presented - cost in radiation hardness and size)
    Maximal radiation hardness (bigger bearings - cost in dissipation and size)
    Maximal compactness (smaller links - cost in radiation hardness and dissipation due to falling link stiffness)
    +) Flex logic (Fig.23) seems even better for the nanoscale but bad for 3D printed models with quickly wearing high dissipation plastics.
    +) I'm a bit puzzled why they've chosen pushing instead of pulling. (Well it doesn't really matter much.)
    +) The transmission lines move in a reciprocative manner (just like in rod logic) ... (Had to add "reciprocative". I totally love that word.)
    +) Related mechanism: https://en.wikipedia.org/wiki/Whippletree_(mechanism)
    +) Bonus for mentioning Konrad Zuse and his Z1
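The macroscale version of the constant-force argument from the friction note above, as a tiny sketch (Coulomb friction only; whether superlubric nanoscale friction behaves like this is exactly my open question):

# Coulomb dissipation P = mu * F_normal * v contains no area term, so
# constant-force loading gives the same loss for any bearing size,
# while the usual constant-pressure loading makes F grow with area.
mu, v, pressure = 0.01, 0.5, 1e5  # friction coeff, m/s, Pa

for area in (1e-6, 1e-2):  # m^2, small vs. large contact
    p_const_pressure = mu * (pressure * area) * v  # F = p*A grows with area
    p_const_force = mu * 1.0 * v                   # F fixed at 1 N
    print(f"A = {area:.0e} m^2   P(const pressure) = {p_const_pressure:.1e} W"
          f"   P(const force) = {p_const_force:.1e} W")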


    I still haven't read all the way through the paper.
    So there's a chance I've missed some important points that I would like to point out especially.


    PS: I was notified about this via a google alert I set up for "atomically precise manufacturing"
    This guided me to: https://boingboing.net/2018/01…s-made-purely-of-joi.html
    Also I found some discussion here: https://news.ycombinator.com/item?id=16129830

Great!
IMO one of the most advanced and interesting things currently going on
are the SW projects of Christian Schafmeister.
    Here's a presentation:


    Then there also is:
    http://cadnano.org/
    Which I haven't yet looked into.


    As for "Nanoengineer-1", AFAIK it has been abandoned by the original developers.
    Eric Drexler somewhere did mention a few details about the exact reasons (can't find it right away).
    I think the development ran into a dead end probably due to a combination of
    * unready SW technology (especially GUI side) and
    * technical debt


We really need functional reactive programming (FRP).
As far as I can see this is the only way to cleanly separate logic from interface
and make GUI programs composable instead of wheel-reinventing throwaway code.
(That would change productivity by orders of magnitude.)


    NE-1 was picked up by Bruce Allen ***
    http://moleculardynamicsstudio.blogspot.com/


    But:
* I think this might now go in a bit of a different direction, straying away from APM.
It seems the focus went more to highly accurate predictions for research than to
robust, feature-rich development & exploratory engineering.
* NE-1 is a conglomerate of deeply imperative base libraries => I'd wish for pure FRP wrappers. -- But that is not a goal.


    ----


    Nov. 2016 I had some mail traffic with Tom Moore ( http://machine-phase.blogspot.co.at/ )
    about APM related SW in general, installing NE-1 on Ubuntu, and what I would concretely wish for in particular.
    Here's a crop-out of what I wrote:


    "In the long run I want to see detail mipmapping, seamless zoom-out to bulk limit approximation,
    black box subsystems with physical geometry and logistic IO parameters (lazy deferrable!).
    And maybe polar logarithmic multi-scale view: ..."


I think this system-level focus is not really where *** is going.


I also noted the major point that auto-equilibration is not scalable in NE-1 (a major weak point)
and pondered the internal implementation and ways to improve this.
Gotta look into the equilibration source code sometime.


    ----


    "A couple of months in the laboratory can frequently save a couple of hours in the library."
    What a funny reverse formulation.
    It's about avoiding the tedious rediscovering of unknown knowns.


    But that sometimes works in reverse just as well.
    "A couple of months in the library can frequently save a couple of hours in the laboratory."
    Then it's about avoiding the tedious reconfirming of known unknowns.


    ----


Other things coming to mind regarding molecular modelling software: NanoHive, BOINC, ...
& the preparation for colour 3D printing of models, which I have done but with whose process I'm not very happy yet.

Yeah, I got info about this via the Lifeboat Foundation blog's Google+ channel.
    Here: (A)
    # https://plus.google.com/+Lifeb…ndation/posts/7tAm7K4KVLn
    # https://lifeboat.com/blog/2017…e.com&utm_campaign=buffer
    # https://scienmag.com/scientist…le-of-building-molecules/
    # https://www.nature.com/nature/…672/full/nature23677.html
    But your link has some info-graphics.


    A bit earlier another remotely related thing came in there too.
    This article here: (B)
    # https://plus.google.com/+Lifeb…ndation/posts/RtY1nQDJjwC
    # https://m.phys.org/news/2017-0…s-predefined-regions.html
    # http://science.sciencemag.org/content/357/6356/eaan6558


    I'm a bit upset about the authors of the article (A).


    Have you read the very first sentence of their abstract here:
    http://www.nature.com/nature/j…html?foxtrotcallback=true
    and checked their leading three references?


    Citation: "It has been convincingly argued 1, 2, 3 that molecular machines that manipulate individual atoms, or highly reactive clusters of atoms, with Ångström precision are unlikely to be realized."


    NNGNNH! ... Time for analysis why this happened ... again.


The fact that we still cite these minus-sign-bearing references suggests that we may still lack knowledge about both:
# the "modern" far-term goal of gemstone-based nanofactories and ...
# the incremental path towards them.


    (Side-note: I use "we" as in "majority" here since the authors of A are not the only ones citing these references).


Or it may suggest that we ruthlessly dismiss all those ideas that are not viewed by a large group as bringing (with high likelihood) economic returns within our own lifetimes. (Side-note: personally I'm agnostic in regards to this last point.)


The work done in (A) (one of the R&D fronts of soft nanomachines) is in some respects (solution-phase thermally driven assembly) relatively near to the incremental path. In some other respects very far though (utterly different far-term goal: hard/stiff nanomachines vs. eternal limitation to soft/compliant nanomachines)! Nonetheless (or exactly because of that) the recent amazing progress in climbing the "stiffness ladder" of "hinged foldamer nanomechanics" seems to have flown totally under the radar.


I guess we are still standing way too close in front of the "bark" of the "soft nanomachine tree".


    The R&D fronts of soft nanomachines are:
    # by basically everyone well known for not leading up to advanced gemstone based APM and ...
    # by many again and again confused (or claimed equal out of fear**) with the incremental path.


The incremental path is utterly different: it treats soft nanomachines as just a means to get away from soft nanomachines ASAP, and thus actually does lead to advanced APM systems.


    **Part of the problem might also be conformist group-think where we try to keep our own reputation maximally safe by aggressively marginalizing our colleagues. Even if we personally would not aggressively disagree.


    (Sorry for ranting.)

    I recently commented on this topic there:
    https://plus.google.com/+Lifeb…n/posts/V1ouSsTkA8C?hl=de
    I'll repeat that comment here slightly adjusted to fit the context of this discussion.


I found these two open papers:
informal overview: http://www.scs.illinois.edu/bu…hlights/pub32_Science.pdf
details: http://www.scs.illinois.edu/burke/files/pubs/pub32.pdf
(I've only read through the first one in detail yet.)


    Here's what this "molecule making machine" is about in essence:


    The the "Suzuki coupling" is used to make carbon-carbon bonds
    https://en.wikipedia.org/wiki/Suzuki_reaction
    like so: ...≡C-B(OH)2 + Br-C≡... - > ...≡C-C≡... + waste
    To continue after a first reaction one needs pre-delivered building block molecules that are at least double capped
    like so: Br-C≡...≡C-B(OH)2
    But this would self polymerize. That is it would form "infinite" chains. So the solution found was to temporally cap (chelate) the boron end with a molecule called "trivalent N-methyliminodiacetic acid (MIDA fro short)" like so:
    http://www.sigmaaldrich.com/ch…ights/mida-boronates.html
    For purification it was found that the MIDA caps on the growing product molecules can be locked/released to/from silica particles by different solvents.


    MARTINE hit the nail on the head with his comment.

    I'll put it in different words:


    Limitations that are obviously present:


Since this is conventional chemistry and not mechanosynthesis, the yields are low and the error rates are high. Every synthesis step can easily have losses in the single-digit percent range. Thus this is only suitable for small molecules (as the work itself says) and does not scale up very far; see the sketch below.
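How per-step losses compound (illustrative yields, not numbers from the paper):

# With a fixed yield per coupling step, the overall yield falls off
# exponentially with the number of steps.
for step_yield in (0.99, 0.95, 0.90):
    for n_steps in (10, 50, 100):
        print(f"yield/step = {step_yield:.2f}, steps = {n_steps:3d}"
              f" -> overall = {step_yield**n_steps:6.1%}")

Even at 95% per step, 50 steps leave under 8% overall, which is why the approach stays confined to small molecules.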


    Limitations of this process that I'm not totally clear about are:


    Can this synthesis be done hierarchically or only serially? (Hierarchically would extend scalability a bit.)


The minimal size and available shapes of the pre-delivered molecules (some shown in the more technical PDF) limit what can be made. The more informal PDF says that they can create loops (how?). Given the shapes of the building-block molecules shown in the technical PDF I highly doubt the loops can be made maximally tight and maximally close together. In other words I highly doubt polycyclic diamondoid cages can be synthesized (not to speak of strained ones).


For this reason I doubt that this process can be used for the synthesis of some of the small tool-tips for diamondoid mechanosynthesis (like e.g. DC10c). (These tool-tip molecules would be small enough to not suffer too much from the yield-decline problem of conventional non-mechanosynthetic chemistry.)


    In light of "early APM" I suspect this technology could still could be useful for:
    * The synthesis of light activated "motor molecules"
    * The synthesis of novel side chains for foldamers (for boosting stiffness / symmetry / covalent cross linking / ...)
    * ...


With early APM I'm referring to "Coarse-block APM systems", page 33:
https://energy.gov/sites/prod/…ntation%20-%20Drexler.pdf
And to "modular molecular composite nanosystems" (MMCNs).
(Are these two deemed identical by Drexler??)


    I'm NOT referring to what is shown in the INFAPM workshop video!
    That is (as I currently see it) just about the "System-level tech demos" dead end in the diagram (also page 33).


    ----


Automated synthesis will by now certainly have come a long way from where it was in the 1980s, but yes, it's still at the very beginning (progress is accelerating here). I think cheap 3D-printed microfluidics has the potential to give this area (and the area of foldamer nanosystems) a major boost.

    "Least problematic focus" appears to be a goal without a solid reason. If the audience is presumed to be that emotionally sensitive, exposure to non-linear mapping is probably an effort in futility.

True, for such a super-sensitive audience that's likely "small-minded" enough to confuse physical size with significance, exposure to non-linear mapping may lead to severe psychological breakdown (an exaggerating joke of course).


I guess I have the problem of too many options and a lack of criteria for narrowing them down. Choosing the least problematic focus, or better: avoiding the most problematic focus, seemed to be a good starting point.


Old buildings are an interesting idea; I hadn't thought of that one.
I guess the most widely known of these is the "Great Pyramid of Giza", best viewed from the perspective of the "Pyramid of Menkaure". https://mrgris.com/projects/me…b6b7dd3@29.97271,31.12854
A bit of an issue with these old buildings is that while they themselves are interesting, there usually is not much interesting stuff around them for a long stretch. This leaves a gap in the map. I had the idea of the European "North Cape" but that had the same problem. Also these very old buildings usually are not imaged at super high resolution (unlike e.g. city centers), so they become blurry at the maximally magnified end of the map.


I guess balancing interestingness across all size scales is a good criterion.
This translates to looking for a kind of fractal succession. Maybe I should look here:
https://en.wikipedia.org/wiki/…inuously_inhabited_cities
since fractal coastlines (not referring to fractal dimension here, as is usually done) seem to have been a catalyst for "early" seafaring civilization.


    Maybe: Greece Athens "Sunken Lake" (with caves unexplored to this day).
    https://en.wikipedia.org/wiki/Vouliagmeni#Lake_Vouliagmeni
    Viewing coordinates: 37.807075,23.786131
    https://mrgris.com/projects/me…b6b7dd3@37.80707,23.78613


    This place near my home: 48.132664,16.392811 has kind of a fractal situation going on.
    https://mrgris.com/projects/me…b6b7dd3@48.13266,16.39281
    (btw: the Alps make a nice bow in this view)
    I could motivate using a place in my hometown Vienna by arguing that Erwin Schrödinger was born there.
    But actually I don't want to focus on specific people.
    There's a (rather un-pretty) place named: "Schrödingerplatz"
    Viewing coordinates: 48.241081,16.437511 or 48.240963,16.437373
    https://mrgris.com/projects/me…b6b7dd3@48.24108,16.43751
    The perspective with that center lines up Vienna's four water-ways nicely.


Two further ideas I came up with, using the topic of APM/MNT as a rough guideline criterion:


    The "Swarovski Kristallwelten" exposition in Austria Wattens:
    https://en.wikipedia.org/wiki/…allwelten_(Crystal_Worlds)
    Viewing coordinates: 47.294340,11.600365
    https://mrgris.com/projects/me…b6b7dd3@47.29434,11.60036
    Topical focus is actually rhinestone though not gemstone.


    The "Atomium" in Belgium Brussel:
    https://en.wikipedia.org/wiki/Atomium
    Viewing coordinates (from south-west): 50.894463,4.340803
    https://mrgris.com/projects/me…0b6b7dd3@50.89446,4.34080
    (This perspective from south west is best since it shows all continents and no building shadows)
    The topical focus is actually nuclear technology though.
    The surrounding landscape is rather flat, wheat field paved and bland.
    Wikipedia (en) says: Distributing pictures of the Atomium is only legal since last year. What?!

    I found an interactive version of nonlinear mapping:
    https://mrgris.com/projects/merc-extreme/
    (Tip: turn on satellite view)


What would be the most visually interesting while least politically, religiously, ideologically problematic place to focus on for a screenshot demonstrating that kind of mapping?


This is already hard. Additionally asking for a place that can be easily identified by most of the world's population is probably too much and likely leaves no results.

    I just wanted to share some recent developments I find relevant:


    Two papers about interlocking DNA nanostructures:
    https://www.researchgate.net/p…es_for_Functional_Devices
    https://www.nature.com/articles/ncomms12414
While still far from atomic resolution and still a little jelly-like floppy, this totally goes in the direction of stiff atomically precise nanorobotics.
I hope that will convince more people that macroscale-style machinery actually makes perfect sense at the nanoscale.


An image from this paper made it into Wikipedia.
I think it's the first CC-licensed image of 3D wiremesh DNA stuff on Wikipedia:
https://commons.wikimedia.org/…DNA_origami_rotaxanes.jpg
That assumes the uploader "Materialscientist" was one of the creators (unlikely; a comment on the high-res upload says "from PDF"),
or had permission from the creators, or that the upload is tolerated, which is not clear from what I can find.
I'd love to use that image but as long as I'm not clear about the license I'd rather not.


On the microfluidics front there's a new website for open-source R&D collaboration (created by the MIT Media Lab).
https://metafluidics.org/
I can see how this could notably accelerate progress if adopted by the subset of struct-DNA-nanotech (& other foldamer) researchers/developers.