Posts by lsuess

    Request for forum "folders":

    • APM specific basics (Basics)
    • far term productive nanosystems (Far Goal)
    • far term products (Applications)
    • near term path (Path)
    • speculative economical and ecological consequences (Impacts)
    • off-topic misc (Misc)

    Currently the folder "Impacts and Applications" mixes up two of them.


    Due to its importance I've moved this comment into its own thread:

    Related to this: quite some time ago I extracted this flow-chart from the minimal toolset paper:
    http://www.molecularassembler.com/Papers/MinToolset.pdf

    In the diagram above this chart corresponds to the points:

    • moiety preparation
    • moiety routing
    • tooltip cycle

    This is only the most minimal toolset. You can begin to see the complexity here.
    The red ones are rate-limiting gas phase steps which still need better solutions.
    Note that I had to make a lot of tradeoffs in this visualisation.
    It is thus far from perfect and definitely improvable.
    (License: Public Domain)

    Attached files yEd-tooltip-cycle-graph_v1.00.graphml (297.3 KB)

    Some thoughts:

    • Is it easy to make backups with this forum software?
    • Since this doesn't seem to be mainstream forum software, migrating might become difficult in the future.
    • With AI cracking captchas, spam protection becomes a nightmare - my website was attacked pretty quickly - a worry ...
    • (Where do you host it?)
    • I've contacted ~30 people (Austria) who might be interested in checking out this forum.
    • ... I might PM you about the last point sometime but not now ...

    ---

    • I realized that the search function could be made less stringent. I quickly ran into a 30 s timeout.
    • The list on the left is hard to read - the lack of spacing makes it difficult to see whether it's just a second line or the next topic.
    • The quoting button doesn't always work.
    • (are sticky notes possible?)
    Quote from Lukas (LSUESS)

    Then there's the point that a static target (e.g. a chemically bonded hydrogen atom) is totally unsuitable:
    See what I noted in the comments of this video (~6 months ago - citation omitted)
    http://www.youtube.com/watch?v=cdKyf8fsH6w&t=20m30s
    >> Here Ralph Merkle notes in passing: "... atoms have nuclei which are point-masses and yeah they blur a little bit but who cares forget about it ..."
    While irrelevant for APM - if we attempt "shoot-and-hit fusion" we do care.
    I repeatedly get the classic question of why those nano-machines don't quantum-disperse and become useless. That's a reason for this comment.

    Just for completeness in case that youtube comment vanishes in the future:

    Quote from Lukas (LSUESS)

    aka mechadense

    20:30 About quantum blurriness - another take on this: if you capture a molecule in a tight space (e.g. in a box) you do work against degeneracy pressure, which is released as "omnidirectional" kinetic energy when you suddenly lift its spatial constraints completely. The tighter it was compressed in space, the faster its probability distribution will fall apart. This is Heisenberg's uncertainty principle in slightly unconventional wording (small spatial distribution -> wide momentum distribution -> fast wave-packet dispersion). Judging from this, macroscopic crystals, whose average outer positions can be (it has been done) measured down to the femtometer level in space, should fall apart instantly because of this extreme sharpness in space. But they don't! Why is an interesting question in itself - it has to do with not yet well understood quantum decoherence.

    If you strongly bond a small molecule to the crystal (that is, you use the crystal as a movement-constraining box), the molecule essentially becomes part of the crystal and inherits its sharp, non-dispersing position. Actually the macroscopic position of the crystal roughly pins down the positions of the atomic nuclei of the molecule. The exact positions of the nuclei (at 0 K) can't be determined as exactly as the position of a macroscopic crystal though. The actual size of the probability distribution cloud for a nucleus is maintained by the "chemical bond force box"; the size of this cloud is below the size of its host atom but above the size of the nucleus. As a side-note: the size of the nucleus is maintained by its "nuclear force box", and the size of a whole atom (electron shell) by the "electrostatic core potential box".

    To theoretically recreate the actual "force pictures" that have been taken of molecules, you have to "add" to the core crystal location (which does not disperse) the nucleus blurriness, the electron shell blurriness and finally some thermal blurriness - the same for the opposing needle tip. (Mathematically this "adding" is actually a convolution.)
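To put rough numbers on the dispersion argument above, here is a minimal Python sketch of free Gaussian wave-packet spreading (the standard textbook formula; the confinement sizes are just illustrative examples):

```python
# Free Gaussian wave-packet dispersion: sigma(t) = sigma0*sqrt(1 + (hbar*t/(2*m*sigma0^2))^2)
# Illustrates the point above: the tighter the initial confinement, the faster the spread.
import math

HBAR = 1.054571817e-34      # J*s
M_PROTON = 1.67262192e-27   # kg

def spread_time(sigma0, m=M_PROTON):
    """Characteristic time for the packet width to grow by a factor sqrt(2)."""
    return 2 * m * sigma0**2 / HBAR

def width(t, sigma0, m=M_PROTON):
    """Packet width at time t for initial width sigma0."""
    return sigma0 * math.sqrt(1 + (HBAR * t / (2 * m * sigma0**2))**2)

for sigma0 in (1e-10, 1e-12, 1e-15):  # atom-, picometre-, nucleus-scale confinement
    print(f"sigma0 = {sigma0:.0e} m  ->  spread time ~ {spread_time(sigma0):.2e} s")
```

A nucleus-scale confinement disperses many orders of magnitude faster than an atom-scale one, which is why the "chemical bond force box" sizes matter.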

    I think we have a case of "separation of concerns" here:
    A.) phase space volume (PSV) compression
    B.) acceleration
    C.) focussing
    Keeping those apart should make solving them easier.
    I think combining any of these should only be done if a compelling reason is found.
    It seems to make sense to do it in the order A-B-C.


    Regarding A.) PSV compression

    I'm not entirely clear on what you were trying to simulate.
    I guess the evolution/propagation of the wave function of one single proton.
    A simple simulation assumes perfect knowledge of the wave function - a pure state (is that correct?)
    Thus no matter how you choose your wave function it always has the minimal phase space volume (hbar).
    Then, for a given initial spatial localisation, the packet will disperse at the minimum rate allowed by the uncertainty principle.

    In reality though you have limited knowledge of the wave function and thus you have a PSV which is bigger than hbar even for a single particle. Thus I guess you have to do a more complex simulation over an ensemble of possible wave functions. (Is that the right way to simulate the lack of knowledge of the wave function?)
    In other words, in addition to the quantum blurriness some knowledge uncertainty adds to it: (density operator)
    https://en.wikipedia.org/wiki/Density_matrix
    I wonder: Is this operator normally only applied to bunches of particles?
    Annealing e.g. will only work with more than one particle.
    So I’m not really sure how to compress PSV here.
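For illustration, here is a tiny NumPy sketch of how the density operator separates quantum blur from knowledge uncertainty; the two candidate wave functions are made up purely for the example:

```python
# Density-matrix sketch: an ensemble of possible wave functions (classical ignorance)
# is exactly what the density operator encodes - also for a SINGLE particle.
import numpy as np

psi_a = np.array([1.0, 0.0])               # one candidate wave function
psi_b = np.array([1.0, 1.0]) / np.sqrt(2)  # another candidate

rho_pure = np.outer(psi_a, psi_a.conj())   # perfect knowledge: pure state
# 50/50 ignorance about which wave function the particle has: mixed state
rho_mixed = 0.5 * np.outer(psi_a, psi_a.conj()) + 0.5 * np.outer(psi_b, psi_b.conj())

def purity(rho):
    """Tr(rho^2): 1 for a pure state, < 1 for a mixed state."""
    return np.trace(rho @ rho).real

print(purity(rho_pure))   # 1.0 -> minimal phase-space volume
print(purity(rho_mixed))  # < 1 -> extra "knowledge uncertainty" on top of the quantum blur
```

So a pure-state simulation answers the minimal-PSV case, and the ensemble (mixed-state) case is indeed what the density matrix formalism is for.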


    Regarding B.) acceleration

    While accelerators sufficient to boost ions to fusion energies can be relatively small** (m scale) compared to particle accelerators (km scale), I think they'll always remain big relative to the nano-scale.
    [** Neutral Beam Injection for ITER: >900 tons! - a whopping 1000 keV though]

    The most compact way to accelerate ions with APT that I currently know of are optical cavity accelerators:
    http://www.quantumday.com/2013…accelerator-built-on.html
    Interesting video: https://www.youtube.com/watch?v=LG1kVIIy2Ok
    Basically analogous to these microwave accelerators, just finer, but still with similar total length.
    https://commons.wikimedia.org/…e:Desy_tesla_cavity01.jpg
    Trouble: the small channel (dx) => big dp in the transverse directions :S => B-A-C ??

    Side-note: The energy barriers for fusion are not hard to calculate, but I just found that tables for them are somewhat missing on the net. Wikipedia: "... ignition temperature is about 4 keV for the D-T reaction and about 35 keV for the D-D reaction."
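Since such tables seem to be missing, here is a rough Python sketch of the classical Coulomb barrier; the touching-distance model with r0 = 1.2 fm is a common textbook approximation, and note that the ignition temperatures quoted are far below this barrier because of tunnelling and the high-energy Maxwell tail:

```python
# Rough classical Coulomb-barrier estimate for light-nuclei fusion.
# Barrier ~ Z1*Z2*e^2/(4*pi*eps0*r) at the touching distance r = r0*(A1^(1/3)+A2^(1/3)).
E2_COUL = 1.439964  # e^2/(4*pi*eps0) in MeV*fm
R0 = 1.2            # fm, conventional nuclear-radius constant (assumption of the model)

def coulomb_barrier_mev(z1, a1, z2, a2):
    r = R0 * (a1**(1/3) + a2**(1/3))  # touching distance in fm
    return z1 * z2 * E2_COUL / r

# Both come out at roughly 400-500 keV - hundreds of times the ~keV ignition
# temperatures, which is the tunnelling + Maxwell-tail effect at work.
print(f"D-T barrier: ~{coulomb_barrier_mev(1, 2, 1, 3) * 1000:.0f} keV")
print(f"D-D barrier: ~{coulomb_barrier_mev(1, 2, 1, 2) * 1000:.0f} keV")
```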


    Regarding C.)

    That's the only one of the three parts (A, B, C) you actually seem to have simulated.
    You seem to make a choice of some arbitrary initial localisation, resulting in an equally arbitrary (but corresponding) dispersion speed. (Choosing e.g. the size of the proton in its nuclear force "box", 1.7e-15 m, makes no sense - that should be obvious though.) The only size I see that's special here is the radius of the acceleration ring. Widest start, slowest dispersion. (?? output size of an acceleration channel ??)


    >> JimL: "The system isn't closed because, classically at least, a charged particle moving through the focussing fields will experience acceleration and thus radiate energy away. If it weren't for quantum mechanics the nuclei would most likely first settle into an ever tighter beam line and then eventually slow to a stop, since there are field potentials along every axis."

    I'm not entirely sure I get what you're trying to say here.

    While you can increase or decrease a particle's energy with external electric or magnetic fields, I'm not sure it is possible to make its energy more precise - that is, lowering its entropy, or in an uncommon formulation, quasi cooling it at a hot temperature - (A & C intermixed - bad?). With electromagnetic fields this is surely possible, as laser cooling shows.
    It might have to do with this:
    a.) magnetic fields preserve energy (except synchrotron radiation at high energies)
    b.) electric fields preserve (... what was it again? - darn, I forgot).
    (Side-note: I think this is applied to visualize local spots of the table of nuclides with a Wien filter.)


    ps: The necessary massive parallelism and high frequency for a practical level of power generation will make this endeavour even more difficult.

    Ah APT and nuclear fusion - an interesting topic!
    Btw: (offtopic) Wendelstein 7-X is starting soon :)

    >> JimL: "... realized I was about to embark on an original research project that would take a long time. ..."
    Yep, it's completely uncharted territory there.

    >> JimL: "... So first thing I felt I had to do was write some simulation code ... That was a mistake - at least if I had planned to complete the article in any reasonable time frame. ..."
    You indeed picked one of the most difficult topics there are - regarding applications of advanced APM products.

    I'm not sure if it is even in principle possible to achieve "shoot and hit" fusion. What I mean by this just-invented term is a fusion of a pair* of nuclei with a probability >>50% of achieving a successful hit on the first try. (I assume this is what you were trying to investigate here.) I see no possibility for exploratory engineering here - too many reliable models are missing.

    IIRC I once heard something along the lines of: "one cannot focus a particle beam (in position space) below what it was at the point of origin, because of Liouville's theorem of constant phase space volume (of closed systems?)"**
    https://en.wikipedia.org/wiki/Liouville's_theorem_%28Hamiltonian%29
    I once came into contact with people working on shooting highly charged ions through fine channels.
    I think one of them said that this could maybe cheat this unfocusability limitation.
    http://www.iap.tuwien.ac.at/www/atomic/surface/capillaries
    (I doubt this is applicable here.)

    I think the aforementioned limitation** is an oversimplification though - since laser cooling & something like evaporative cooling can AFAIK compress phase space in free(?) position space. There seem to be further methods, as a quick search just now revealed: see "Conservation of emittance" - this may be related:
    https://en.wikipedia.org/wiki/…Conservation_of_emittance

    It certainly won't hurt to have the minimum possible phase space volume (that is, Planck's constant hbar) at the release point from machine phase to free flight.

    Then there is the point that the starting momentum must be incredibly accurate so that you actually hit the target. This equates to compactness in momentum space and consequently wide dispersion in position space.

    The result: nano-scale focussing won't work, since you have to start with micro- to macro-scale wave functions. You have to start with a super-cold and spatially widely delocalized proton/deuteron/... and accelerate its whole wave function in an incredibly noise-free way to get it cold and hot at the same time (in different directions - spherically symmetric??).

    It may help to simulate the process in reverse (backward in time) to find out what you need to start with.
    The goal situation seems relatively simple: Planck constant & target nucleus size -> projectile's minimal necessary momentum uncertainty at the focus ... (don't forget Coulomb repulsion + small errors)
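A hedged back-of-the-envelope sketch of that goal situation; the 100 keV deuteron energy, 1 micron flight distance and 1 fm target size are my assumed illustrative numbers, not anything from the thread:

```python
# Back-of-envelope (illustrative assumed numbers): how wide must the initial wave
# function be so a deuteron can reliably hit a femtometre-scale target nucleus?
import math

HBAR = 1.054571817e-34    # J*s
M_D = 3.3435838e-27       # deuteron mass, kg
E = 100e3 * 1.602177e-19  # assumed kinetic energy: 100 keV, in J
L = 1e-6                  # assumed flight distance: 1 micron
R_TARGET = 1e-15          # target nucleus scale: 1 fm

p = math.sqrt(2 * M_D * E)    # forward momentum
dp_max = p * R_TARGET / L     # allowed transverse momentum spread to still hit
dx_min = HBAR / (2 * dp_max)  # Heisenberg: minimum transverse source size

print(f"required transverse source size: {dx_min * 1e6:.1f} um")  # ~5 um
```

Even with these fairly generous assumptions the required transverse source size comes out at microns, i.e. vastly larger than the nano-scale, which is why I say above that you have to start with micro- to macro-scale wave functions.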

    An interesting question I came across regarding this: could supercooled ion trap boxes with a size of some microns to centimetres be used to transport only a part of the wave function of an ion? Some method of transport with even lower friction than superlubrication may be necessary. I'm collecting my thoughts about such levitation/hyper-lubrication here:
    http://apm.bplaced.net/w/index.php?title=Levitation

    If a "shoot-and-hit fusion" attempt is actually successful, I see no chance in hell that you'll be able to control the exit trajectory. Catching high-entropy exit particles without messing up the super-cold environment seems very difficult. If you're that good at things you may start to think about efficiently producing antimatter - which today I consider 100% sci-fi.

    Depending on the choice of fusion partners you may need a third partner to receive the released energy in the form of momentum and prevent purely radiative energy release (which seems hardest to capture).

    Then there's the point that a static target (e.g. a chemically bonded hydrogen atom) is totally unsuitable:
    See what I noted in the comments of this video (~6 months ago - citation omitted)
    http://www.youtube.com/watch?v=cdKyf8fsH6w&t=20m30s
    >> Here Ralph Merkle notes in passing: "... atoms have nuclei which are point-masses and yeah they blur a little bit but who cares forget about it ..."
    While irrelevant for APM - if we attempt "shoot-and-hit fusion" we do care.
    I repeatedly get the classic question of why those nano-machines don't quantum-disperse and become useless.
    That's a reason for this comment.

    An interesting question related to "shoot-and-hit fusion", which should also be useful for other things, is how to best ionize atoms (e.g. hydrogen) and contain those ions in an advanced APT system.

    All in all I think putting a lot of effort into "shoot-and-hit fusion" isn't helping to speed up the development of APM.
    It is super complex and seems far off, and thus pulls too much on the suspension of disbelief. Thus it's not on my personal priority list.

    ~~~~

    Besides the still very speculative "shoot-and-hit fusion" there will be many less speculative possibilities to use APM to boost conventional fusion approaches.
    I'm collecting my thoughts about this here - only a bulleted list so far:
    http://apm.bplaced.net/w/index.php?title=Nuclear_fusion

    I think inertial fusion could be made much more compact (no idea how much exactly) and could easily go beyond break-even. Crashing nuclei together with their electron hulls makes things much more complicated on the simulation side.
    I'd expect severe limits on the downscalability because of physical scaling laws though.

    After some back-of-the-envelope calculations I think that stellarator-style fusion won't get light enough for e.g. a mobile spaceship (where fusion IMO makes the most sense), even with the use of APT materials.


    ~~~~

    >> JimL: "What annoys me the most, though, is ... I can't find my old C and Python code ..."

    I too have lost something APM-related once. It was the newest version of this poor little fella:
    In particular I've lost the work I did to fit it inside itself in its collapsed state.
    I made this model when I didn't yet know the shortcomings of the molecular assembler concept.

    ~~~~

    some remotely related links:
    https://en.wikipedia.org/wiki/Lanthanum_hexaboride --- interesting image
    https://en.wikipedia.org/wiki/Field_electron_emission
    https://en.wikipedia.org/wiki/Electron_gun


    >> ... I found very few [videos] of any educational value. ...

    What is certainly and especially missing are videos along the lines of what E. Drexler does in his new book "Radical Abundance":
    # Chapter 5 "The Look and Feel of the Nanoscale World"
    # Chapter 10 "The Machinery of Radical Abundance"
    I plan to tackle that and more (with as many graphics as possible) in the "basics" and "tour through the nanofactory" sections I mentioned. The missing videos in this area are the reason why I wrote:
    >> me: "... I want to present for the first time the already existing knowledge of nano-factories in a well illustrated way that ..." and why I want to make these videos in the first place.

    For the topic "products of nanofactories" there are some videos out there, but not many:
    I especially like J. Storrs Hall's "weather machine" video ...
    part1: https://www.youtube.com/watch?v=EOPsczPlzzY
    part2: https://www.youtube.com/watch?v=Fd63OMosnq0
    ... because he focusses on the more overlooked side of applications that I'm especially interested in.
    It's IMO one of the more speculative applications though.
    Btw: I'm collecting APM related topics that I think are unjustifiably under-represented here:
    http://apm.bplaced.net/w/index.php?title=The_usual_suspects

    >> ... The closest I thought that came to having useful material of use to scientists, engineers, and technologically literate audience is this one by Ralph Merkle: ...

    As chance would have it, I re-watched this one just a few days ago, since it's the first link on R. Merkle's homepage.
    (details off-topic -> omitted)

    >> ... I did review several hundred, sorted first by view count, then by viewer ratings, and lastly by most recently uploaded. ...

    Wow, quite a bit of effort ...

    When it comes to general introductory videos there are quite a few out there. I'm collecting the best introductory videos I occasionally find at the bottom of my wiki's main page. See here:
    http://apm.bplaced.net/w/index…Manufacturing_Wiki#Videos
    (I've never checked the view counts though - viewer rating may not be too important if views are plentiful?)

    Are there any you have missed in there?

    There are more in-depth videos about APM-related topics, but they are mostly too technical for a general audience, like:
    the Foresight conference videos (mostly near-term topics), of which I found some rather interesting:
    https://vimeo.com/foresightinst/videos
    This video about mechanosynthesis is really great but also not suitable for the general audience:
    https://www.youtube.com/watch?v=705raszSLGA


    >> A couple months ago I did search Youtube for videos where the word "nanotechnology" ...

    Have you read Drexler's "five kinds of nanotechnology" blog entry?
    http://metamodern.com/2014/04/…-kinds-of-nanotechnology/
    Basically the term "nanotechnology" is about as specific as "macrotechnology", so it's no surprise that it got annexed.

    For this reason I'm also considering using the term "minifactory" instead of "nanofactory", relating to the size of the whole thing rather than the size of its smallest components.
    Good Idea / Bad Idea?
    (I ditched personal factory, personal fabricator, living room factory)

    Choosing sensible nomenclature is a difficult task. I note my ideas about that topic here:
    http://apm.bplaced.net/w/index.php?title=APM_related_terms
    (It seems I need to reread "Radical Abundance" to find out whether Drexler uses APM to refer to the whole path or more to the far-term goal.)

    Btw: I came up with the term Gemstone-Gum-*** or Gem-Gum-***
    *** = [Technology|Factory|Manufacturing|...]
    I think it fulfils the major requirements:
    1.) it is catchy (likely to actually be used)
    2.) it is accurate enough to be unannexable (the term provides a concrete example of a diamondoid meta-material - out of which nanofactories and their products are mostly made)

    Your thoughts?

    In general I've mostly stopped using anything with the "nano" prefix for internet searches.
    When skimming for videos I often search for the names of the main people in the realm of APM and filter for the newest material.
    And when I'm starting a conversation about APM nowadays I usually take the route via "advanced production technologies", starting with 3D printing and avoiding the nano prefix altogether - it helps - the conversation does not immediately deteriorate into the sunscreen, lotus-spray or a similar direction.

    |||

    >> I think you may need to at least tell the viewer that later videos will explain the origin of the limitations of nano-factories.

    Thanks, I was at best unconsciously aware of that. I certainly can't include any explanations in the introduction, since they'll take up too much space even if I compress them as best I can (see below), but I'll add a note that I'll explain this later.


    >> ... I can't begin to guess what is possible and impossible. ... "wait - why can't I make biological materials? Is making a protein molecule not possible? What would be possible?"

    As I see it there is a combination of at least three reasons why biological products (complex tissues, not molecules like proteins) seem not to be practically producible with nanofactories:

    1.) the number of mechanosynthetic situations encountered
    artificial: a few diamondoid materials
    biological: tens of thousands of types of molecules (and embeddings in ice*)

    2.) the lack of "diamondoidness / gemstone-likeness" in biological tissues
    artificial: stiff; ... biological: non-stiff (VdW bonds between ice* and biomolecules too)

    3.) a very different "decompression chain" from blueprint to product
    artificial: high-level 3D model -> triangle mesh or similar -> toolpaths -> low-level actuator commands -> final arrangement of atoms (at room temperature)
    --- you get almost the same product from the same blueprint every time
    biological: DNA -> ribosomal protein production -> modifications through interactions with other proteins + a lot of use of emergent behaviour -> final arrangement of atoms (in a shock-frozen* snapshot)
    --- you get quite a bit of a different product from the same blueprint every time


    Point 1.) may be doable by putting in lots and lots of additional effort beyond basic mechanosynthesis.
    Further continuous improvement beyond basic mechanosynthetic capabilities will go in this direction.
    Point 2.) should be doable too, by forcefully stretching chain molecules and doing mechanosynthesis near the ends. Sufficient cooling and a molecular-sorting-pump-like vacuum lockout right after production will probably be necessary. So after the attainment of basic diamondoid mechanosynthesis it shouldn't be too hard to extend it with capabilities to produce e.g. pure sugar and some other similarly simple substances.

    Point 3.) is serious though:
    Atomically precise 3D scanning of the product of a biological system [which seems ridiculously difficult because of point 2.) in reverse, where you can't choose what you find] and compressing it into a mechanistic nanofactory-style blueprint would at best produce something with strange compression artefacts (like in an over-compressed JPEG image - while AI (in the sense of smart compression) is rather unrelated to basic APM capabilities, here it may help a bit). For a perfect 1:1 copy you'd need to store the location of every atom - which in its most compact representation basically IS the product. Making copies while taking apart a shock-frozen original (whatever you'd call that process - both "cloning" and "beaming" are very misleading) is IMO not very sensible. I haven't thought about "divergent disassembly" for scanning as an analogue to "convergent assembly" yet - I'd guess the slicing process would slow things down severely.

    In conclusion: I wouldn't go so far as to say that it is completely and utterly impossible to make a perfect 1:1 copy of a steak with a diamondoid nanofactory (on steroids), but I'm pretty sure it is for all practical purposes too far off, and there are way more effective and way, way easier (but still harder than basic synthesis of diamond) ways to make something that
    A.) on the macro-scale comes close enough to e.g. a steak that it fulfils its purpose (nourishing, healthy, tasty and nice-looking)
    B.) on the sub-micro scale is actually completely different (think APT-based micro-scale ink-jet printer).

    Does that reasoning make sense - spot any errors?


    I'm collecting my thoughts about synthesis of food here:
    http://apm.bplaced.net/w/index.php?title=Synthesis_of_food

    PS: If nanofactories emerge from a long and twisted path through a series of "pseudo-biotechnology" steps, there will be remainders of earlier technology steps of this pseudo-biological stuff (DNA origami & co) that are still producible. There may be motivation though to remove the bootstrapping history so that nanofactories can be used in more extreme environments.
    I write down my thoughts about this here:
    http://apm.bplaced.net/w/index…external_limiting_factors

    ....

    >> ... a belated welcome ...
    Don't sweat it, I'm actually pleasantly surprised since I was expecting to wait at least half a month.

    >> I liked the idea you used to make the size of an atom comprehensible.
    Thanks, the idea may be good but I think the video needs improvement.
    Btw (off-topic): this works for visualising the size of the earth too. Scale down a soccer field to hair size, and an equally scaled-down model of the earth fits comfortably onto a soccer field. Beyond that (solar system, galaxy and beyond), gaining an intuitive feeling for absolute size relations to everyday objects is IMO impossible.
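A quick arithmetic check of that scaling analogy, with rough everyday sizes assumed (a ~100 m field, a ~100 um hair):

```python
# Quick check of the soccer-field / hair / earth scaling analogy
# (all sizes are rough everyday approximations).
FIELD = 100.0       # soccer field length, m
HAIR = 100e-6       # hair width, m
EARTH_D = 1.2742e7  # Earth diameter, m

scale = HAIR / FIELD        # shrinking a field to hair size: factor 1e-6
earth_model = EARTH_D * scale
print(earth_model)          # ~12.7 m - fits easily on a soccer field
```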

    ...

    Hi,

    I really want to understand nanofactories, and since convergent assembly is arguably one of their most important aspects I need to get a tight grasp on it. I already found out quite a bit and wrote it down here:
    http://apm.bplaced.net/w/index.php?title=Convergent_assembly
    But there are still some major things that I do not understand - partly or in full.

    (*) Note: In the following, when I refer to "the main images of convergent assembly" I mean the four that can be found here:
    1) http://e-drexler.com/p/04/05/0609factoryImages.html
    2,3,4) http://www.zyvex.com/nanotech/convergent.html
    In case you're not aware: in these examples the area branching and volume ratio steps are matched such that:
    [ equal throughput on all stages <=> equal operating speeds on all stages ]
    This seems reasonable as a first approximation.

    In simple math:
    Q2 = 1 s^3 f
    Q1 = 4 (s/2)^3 2f = 1 s^3 f
    => Q2 = Q1 :)
    s ... side-length
    f ... frequency
    Q2 ... throughput upper bigger layer (reference units)
    Q1 ... throughput lower smaller layer

    In main parameters (assuming constant speeds):
    * area branching factor = 4
    * volume upward ratio = 1/(4*(1/2)^3) = 2
    * scale-step = 2

    !! please ask for details if that doesn't make sense to you !!
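The balance condition can also be checked mechanically; here is a tiny Python sketch of the simple math above ("branching" is the area branching factor, "step" the scale-step, and the lower layer runs at step-times the frequency):

```python
# Throughput balance across one convergent assembly step:
# upper layer: side s, frequency f; lower layer: branching units of side s/step at step*f.
def layer_throughputs(s=1.0, f=1.0, branching=4, step=2):
    q_upper = s**3 * f
    q_lower = branching * (s / step)**3 * (step * f)
    return q_upper, q_lower

q2, q1 = layer_throughputs(branching=4, step=2)
print(q2, q1)  # equal throughput on both stages, as in the simple math above
```

The balance holds exactly when branching = step^2, which is the pattern all the "main images"(*) follow.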




    General questions about C.A.:

    Question a)

    Why do all the "main images of convergent assembly"(*) take the convergent assembly steps all the way up to the size of the whole nanofactory? This changes the nanofactory from a convenient sheet format into a clunky box.
    You can read here ...
    http://apm.bplaced.net/w/index.php?title=Convergent_assembly
    ... why I think that "higher convergent assembly levels quickly lose their logistic importance"

    What I also wrote there (for now under the headline "Further motivations" at the bottom) are some things that came to my mind about why convergent assembly nevertheless goes up to the top in the main images of convergent assembly(*). These are:

    * simpler construction of overhangs without the need for scaffolds (stalactite-like structures)
    * the automated management of bigger logical assembly groups
    * the simpler decomposition into big standard parts that can be put together again in completely different ways
    * the possibility to keep everything in vacuum until the final product release - this should not be necessary ***
    (Can you think of any more?)

    I don't deem any of them important enough though to sacrifice the nice sheet form factor that a nanofactory could have. It is clear that the bottom three convergent assembly steps (roughly: 1 mainly mechanosynthesis, 2 mainly radical surface zipping, 3 mainly shape locking) are absolutely necessary. But I'm not so sure about the topmost convergent assembly stages -- they definitely do not increase the nanofactory's speed, that much is certain. (as reviewed above: cross sections at any height have the same throughput capacity)

    *** Vacuum lockout is a special topic, easily big enough to start a separate thread.
    Late vacuum lockout: perfectly controlled environment <- one of Drexler's main points
    Not-so-late vacuum lockout: enforce recyclable shape-locking micro-components (~1 um?) so that we don't end up with APM being the greatest waste producer in human history (I question whether this will be avoidable). Consider this line from the Productive Nanosystems promo video: "the only waste product is clean water and warm air" ... and oops, we forgot about the product when it is no longer needed ... add too much silicon and you can't even burn it normally - you'd get flame-inhibiting slag. (edit: Well, "Radical Abundance" does mention recycling briefly, but treats it more like a magical black box.) Btw: I'm currently working on a simple and elegant vacuum lockout system for arbitrarily shaped micro-scale blocks -- but that's a separate topic ...


    Question b)

    Why do all the "main images of convergent assembly"(*) use a low area branching factor of four (amounting to a side-length branching of two)? As the (compared to nanofactories stupidly low-tech) current-day 3D printers nicely demonstrate, way bigger step sizes can lead to practical production times. Let me put it like this: who would build an (advanced) robot just to put 8 parts together?! Also, stuff usually does not come apart into very few equally sized parts.

    Choosing a bigger step size may be quite a bit slower than the absolute maximum possible (in case the bottom mill layers are quite performant) but it also has two big advantages:
    1) designers will probably have to think less about the production process
    2) bigger steps mean fewer steps - this is much easier for the human mind to grasp

    To elaborate on point two: suppose we choose a step size in side-length of 32 ~= sqrt(1000) ... (instead of the common two -- 32 is still way lower than what today's 3D printers do) ... then we get from 1 nm (5 C atoms) to 1 mm in only four steps, where each step has a comprehensible size ratio.
    Like this: 1nm (1) 32nm (2) 1um (3) 32um (4) 1mm
    (When designing in this setting it no longer seems so far-fetched to actually hit the limits and run out of space. You can actually realize for the first time that there is not infinite space at the bottom - so to speak.)

    Note that with bigger step sizes the throughput balance stays perfectly intact:
    In simple math:
    Q2 = 1 s^3 f
    Q1 = 16 (s/4)^3 4f = 1 s^3 f
    => Q2 = Q1 :)
    Main parameters:
    * area branching factor = 16
    * volume upward ratio = 1/(16*(1/4)^3) = 4
    * scale-step = 4


    Here is my supposition:
    The reason why 32-fold size steps are usually not depicted is probably that you can then barely see three levels of convergent assembly on a computer screen. But there's a way around this! There is a possibility to map the layers of a nanofactory such that one can see all the details on all scales equally well. I made an info-graphic on this quite a while ago, but it turns out the straight horizontal lines are actually wrong.
    see here:
    https://www.flickr.com/photos/…800/in/dateposted-public/
    distorted visualisation of a nanofactory layer stack
     
    Recently I found Joachim Böttger's work, which I think is rather relevant for the visualisation of convergent assembly configurations in nanofactories:
    http://www.uni-konstanz.de/grk…ople/member/boettger.html
    http://www.amazon.de/Complex-L…%C3%B6ttger/dp/3843901805
    http://graphics.uni-konstanz.d…20Satellite%20Imagery.pdf
    http://graphics.uni-konstanz.d…in%20Large%20Contexts.pdf

    I wrote a Python program to do this kind of mapping. Here's an early result:
    https://www.flickr.com/photos/…363/in/dateposted-public/
    I may try to apply it to some screen-shots of this video:
    http://www.dailymotion.com/video/x4mv4t_zoom-into-hair_tech
    I also have further plans with this which would be too much for here though.




    Questions regarding uncommon forms of C.A.:

    There are two exceptions I know of which deviate from the "main images of convergent assembly"(*).
    I'll describe how I understand them below. If you spot any misunderstandings please point them out to me.


    exception a)

    Nanosystems page 418 -- Figure 14.4.
    Main parameters:
    * area branching factor = 8
    * volume upward ratio 1/(8x1/8) = 1
    * scale-step = 2

    Drexler himself writes (capitals by me):
    "... This structure demonstrates that certain geometrical constraints can be met,
    BUT DOES NOT REPRESENT A PROPOSED SYSTEM".
    Here is what this looks like: http://www.zyvex.com/nanotech/images/DrexlerConverge.jpg

    If I understand it right, this is because in this arrangement the throughput capacity rises by a factor of two with every iteration downward, creating a veritably massive bottleneck at the top (30 iterations -> factor 2^30 ~ 10^9).

    In simple math:
    Q2 = [8s^3] f = 8 s^3 f
    Q1 = 8[8(s/2)^3] 2f = 16 s^3 f
    => Q1 = 2*Q2 .... oops X(
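    The per-stage capacity ratio and the resulting bottleneck factor can be sketched like this (`stage_ratio` is a hypothetical helper name; the parameters are the Figure 14.4 ones from above):

```python
# Ratio of a stage's throughput capacity to the capacity of the stage
# directly above it, for area branching b, scale-step k, and a frequency
# that scales inversely with size (frequency ratio = k).
def stage_ratio(branching, scale_step, freq_ratio):
    return branching * (1 / scale_step)**3 * freq_ratio

r = stage_ratio(8, 2, 2)  # the Figure 14.4 configuration
print(r)        # 2.0 -> capacity doubles with every stage downward
print(r ** 30)  # ~1.07e9 -> the massive bottleneck after 30 stages
```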


    exception b)

    The convergent assembly in Chris Phoenix's nanofactory design that he describes here:
    http://www.jetpress.org/volume13/ProdModBig.jpg
    I am not talking about the geometrical design decisions but about the main parameters of the chosen convergent assembly. In this regard it is completely identical to Drexler's (UNproposed) configuration in Nanosystems Figure 14.4, and that on ALL stages, since it has:

    lower (stratified) stages: (chosen geometry not arbitrarily scalable, as Chris Phoenix points out himself)
    * area branching factor = 3x3 (-1 redundant normally unused) = 8
    * volume downward ratio (8x1/8)/1 = 1
    * scale-step = 2
    upper 3D fractal stages:
    * area branching factor = 8 (+1 redundant normally unused)
    * volume downward ratio (64x1/8)/8 = 1
    * scale-step = 2

    Major error here ??

    Unless I am misunderstanding something, I spot a major error in reasoning here. The reasoning goes as follows: you want the throughput capacity of the very bottom stage increased, to compensate for the slowness of this single stage of general-purpose "fabricators".

    BUT: deducing from this that continuing this approach further up the stages helps even more is actually incorrect. Doing so is detrimental. The reason: all further stages are super fast, since they only have to assemble eight blocks into one 2x2x2 block, so this leads to the aforementioned upward-tightening funnel in throughput capacity. While the stage right above the fabricators is seriously overpowered (or equivalently under-challenged), at some point up the stack the load starts to fit the capacity, and from there on up the funnel takes effect. In spite of this exponential funnel situation the top stage still looks totally practical - which is nothing but amazing and once again proves the insane potential of nanofactories.

    What I think is actually necessary for a design where throughput better matches throughput capacity is much, much more parallelism in the bottom layer. When you stack those fabricators and thread the finished parts through quickly, some similarity to mill-style systems crops up - which may not be too surprising.
    (Such stacking may be necessary in one or two more stages - due to e.g. slow surface radical zipping - but that should be it. As I understand it, that is the reason why the lowest three convergent-assembly layer stacks actually become thinner going upward, as can be seen in the productive nanosystems video.)
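    The overpowered-lower-stages argument can be illustrated with a toy model (purely illustrative; the fabricator/assembly frequency ratio of 10^6 is an invented placeholder, not a sourced number):

```python
# Toy model: the bottom "fabricator" stage runs at frequency f_fab, the
# assembly stages above it at a much higher f_asm. In the Fig. 14.4 style
# stack the capacity surplus relative to the actual load halves with every
# stage upward, so at some stage the load catches up with the capacity.
f_fab, f_asm = 1.0, 1e6  # hypothetical frequencies; the ratio is made up
surplus = f_asm / f_fab  # over-capacity right above the fabricators
stage = 0
while surplus > 1:
    surplus /= 2         # the upward-tightening funnel at work
    stage += 1
print(stage)  # with these invented numbers the load meets the capacity 20 stages up
```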

    Imo, in Chris Phoenix's nanofactory text the abstract convergent assembly details could have been separated more cleanly from the concrete geometric implementation details and other material. I may go to the trouble of cropping out and citing the dispersed relevant pieces if requested.

    What also bothers me is that although this is supposed to be a practical design, it adheres rather closely to the very small side-length-doubling steps which I have tried to argue against above.

    As I've seen, Chris Phoenix is a member of this forum.

    @Chris Phoenix:
    If you happen to read this post I would be delighted to hear your thoughts about this.
    Please check if you see any errors in my reasoning.
    If not, how would you modify your design?




    Fine-tuning:

    There are, I think, two main reasons to slightly deviate from the good first approximation of constant speed on all stages that I spoke of above.

    Reason a)
    At the bottom:
    * limit in spatial mechanosynthesis density - manipulators are necessarily bigger than atoms
    * limit in temporal density - slow down to prevent excessive friction heat, since the large bearing surface area outweighs even the big superlubrication benefit
    (these two are rather well known)

    Reason b)
    I have an idea which I call "infinitesimal bearings" - See here:
    http://apm.bplaced.net/w/index…tle=Infinitesimal_bearing
    This should allow us to cheat on the constant-speed-on-all-sizes rule, especially in the mid-size range (0.1um .. 0.1mm).
    Here's a maybe interesting observation:
    To get a throughput capacity funnel that widens in the upward direction (which will certainly be needed but has seemingly never been discussed) one needs a low area branching factor and a high volume upward ratio.
    What would be the optimal geometry for this?
    (____) widening -- (layered) constant -- (3D-fractal) tightening
    This somewhat reminds me of space topologies: elliptical, flat, hyperbolic ...
    Note that any type can be forced into any geometric configuration for a few stages.
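    The three cases can be summarized in a small sketch (`funnel_type` is a hypothetical function; it assumes, as above, that frequency scales inversely with size, i.e. the frequency ratio equals the scale-step):

```python
# Classify the upward throughput-capacity profile from the area branching
# factor and the scale-step: capacity ratio of a stage relative to the
# stage directly above it, with frequency ratio = scale-step.
def funnel_type(branching, scale_step):
    ratio = branching * (1 / scale_step)**3 * scale_step
    if ratio > 1:
        return "tightening upward"  # 3D-fractal-like (e.g. Fig. 14.4)
    if ratio < 1:
        return "widening upward"    # low branching, high volume upward ratio
    return "constant"               # layered / stratified

print(funnel_type(8, 2))   # tightening upward
print(funnel_type(16, 4))  # constant (the balanced example further above)
print(funnel_type(4, 4))   # widening upward
```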




    And finally to the biggest mystery of all I've encountered so far:

    There's this discussion by J. Storrs Hall which I still need to chew through thoroughly.
    It's about the scaling law for the replication time of proposed nanofactories.
    It actually mismatches the scaling law for replication observed in nature by an order of magnitude(?).
    See here:
    http://www.imm.org/publications/reports/rep041/

    I think this is super relevant!
    Some open questions regarding this:
    easy: How would this look in a stratified configuration?
    easy: How much is it "unscalable" in this configuration?
    hard: How can the essence of this be visualized in a less abstract more intuitive way?
    That is: Why does nature choose to do so?
    ....

    ---------------------------------------
    ps: Please excuse the size of this post, but I wanted to have all the convergent assembly material together to form a complete picture.

    Hello,

    First of all many thanks for the new forum :)
    I've quickly read through all the posts and have seen that there is interest in making youtube videos.

    Actually I've been planning to do this for quite a while now.
    It started with a talk about APM that I gave last year.
    ( https://cfp.linuxwochen.at/de/LWW14/public/events/115  )

    Since I had only about sixteen listeners :( I thought about turning the slides into a youtube presentation,
    so that all the work wouldn't go to waste.

    I started out with about 40 slides and improved on them.
    Besides collecting relevant images I made many svg info-graphics myself.
    A few of them can be seen here:
    https://www.pinterest.com/luka…y-precise-nanotechnology/

    The number of slides grew and grew and I've now ended up with about 200 of them (all German atm) - still growing.
    Sadly I realized only very recently that static slides are a catastrophe for youtube - way too boring.
    See this test-video catastrophe: https://www.youtube.com/watch?v=-Y60-80X7q4
    the same **** in German: https://www.youtube.com/watch?v=JoFHHtl7S38
    (I'm well aware that there is much more wrong with these videos than just the single static image)

    As a consequence I plan to switch my focus to making screen-cast videos where I draw stuff and drag and scale images (probably accelerated video with pre-recorded audio). This way there is more movement on the screen and the viewer always knows where to put her/his attention. (Making animations would be way too much effort.)

    With the slides I ended up with five big main parts which are:
    * the basics of working in the small
    * a bottom-up tour through a nano-factory (as a sensible far-term goal - not as an easy-to-reach one)
    * the products of a nano-factory (with focus on solving the great civilisation problems)
    * the path to the nano-factory (current relevant developments)
    * some possible ecological and economical consequences and miscellaneous

    I recently formulated some brand new text for the overall introduction video.
    As a side-note: I want the introduction to be so easy that anyone’s grandparents can understand most of it.

    Here it is:
    (excuse spelling errors - I translated it quick & dirty just now)
    (I'd be pleased to hear your thoughts about this)

    ##################

    Welcome,

    [arouse interest]
    Here I want to introduce you to a technology that has greater potential to enrich our world than all achievements of mankind up to the present day.

    [minimal definition]
    Specifically, this is about a device that can produce all the things you need in your daily life - and that extremely cheaply or even completely free. This device is so small that it comfortably fits on a table, and so quiet and odour-less that you can run it in your living-room.

    [product materials]
    All the often gaily coloured or super-stylish items that come out of this nano-factory consist of very special materials. Although they are made of the tiniest gemstone pieces, they can behave, for example, like rubber. This is only one concrete example, though. On the whole, gazillions of new material properties are possible which from today's view appear either utterly uncommon or outright alien. There are limits though: biological products like real beef cannot be produced. For that a very different technology is necessary.

    [Building material]
    Your personal nano-factory of course needs building material. It can filter this even from completely normal air.
    To use air as building material, your nano-factory needs a lot of energy though. Here this energy comes from a solar-cell foil - which in turn is also made by your nano-factory. With that the circle closes. Instead of air you can also run your nano-factory on other easily attainable substances. In this case there is often more energy contained in the building material than is needed to run the nano-factory. The nano-factory then works like a generator and can feed the excess energy back into the grid or pump it into very special energy storages. I believe you can now correctly guess how you get those very special energy storages.

    [demarcation(?)]
    Attention: This is not about what today is called "nanotechnology" in the media, and also not about swarms of self-reproducing nano-robots of the kind you can read about in some science fiction literature. Instead this is about factual, existing, up-to-date knowledge about those nano-factories.

    Even if we can't yet build such a nano-factory, this doesn't rule out that we can understand major properties of it.
    To make trustworthy statements about a future nano-factory without having the possibility of direct tests or measurements on it, though, we must obey strict discipline. First, we are only allowed to use well-tested theoretical models, and second, in all the estimates we make with these models we need to be very careful. In other words: we always need to leave ourselves big safety margins. If we - under strict abidance by these rules - analyse a rough model of a nano-factory, we see something astonishing. In spite of the consistently pessimistic estimates, we get enormously promising values both for the performance of a nano-factory and for the performance of its products.

    [topic & target audience]
    In this series I want to present, for the first time, the already existing knowledge about nano-factories in a well-illustrated way that is accessible not only to scientists but to the average technologically interested person.

    [benefit for the audience & call for action] ~~improvable~~
    In this compact introduction I have barely scratched the surface of the topic "nano-factory". If you decide to accompany me as we dive deeper into this technology, you can expect an orientation aid in case I manage to motivate you to help with the building of the first nano-factory. And on the other side, you can look forward to an extremely seldom shown image of the future which is not based on the usual suspects, which would be: first, far-from-reality science fiction; second, advertisement for short-sighted profit-oriented research and development; and third, reports of all the seemingly ineluctable future catastrophes in the public media. In other words, you can look forward to a picture of the future which markedly deviates from the traditionally rehashed forecasts.

    [orientation help]
    If a nano-factory now sounds too fantastic to you, I recommend you start at the "path to the first nano-factory".
    If you are impatient and want to know more about the new possibilities which open up with such nano-factories, I propose you start with the "products of a nano-factory". If you're interested in the inner workings of a nano-factory, then start with the "tour through a nano-factory". And if you want to take your time and hear the whole thing from the beginning, start with the "basics". At the end I keep a slot open for speculations about environmental and economic consequences plus further mixed topics.

    [call-for-action & thanks & dismissal]
    This video series is a work in progress - please be patient. If I could spark your interest, please subscribe to my youtube channel. I'm always happy about constructive questions and comments. I should also probably point out that the majority of what I'm going to present here is not my own work; thus I'm going to specify the sources used, to the best of my knowledge and belief. If you managed to endure to this point, I thank you for your attention.

    ######

    Btw: I'm not really happy with the term "nano-factory" but that's for another topic.