>> I think you may need to at least tell the viewer that later videos will explain the origin of the limitations of nano-factories.
Thanks, I was at best unconsciously aware of that. I certainly can't include any explanations in the introduction since they'd take up too much space even if I compress them as much as I can (see below) but I'll add a note that I'll explain this later.
>> ... I can't begin to guess what is possible and impossible. ... "wait - why can't I make biological materials? Is making a protein molecule not possible? What would be possible?"
As I see it there is a combination of at least three reasons why biological products (complex tissues, not molecules like proteins) seem not practically producible with nanofactories:
1.) the number of mechanosynthetic situations encountered
artificial: a few diamondoid materials
biological: tens of thousands of types of molecules (and embeddings in ice*)
2.) the lack of "diamondoidivity / gemstone likeness" in biological tissues
artificial: stiff; ... biological: non-stiff (VdW bonds between ice* and biomolecules too)
3.) A very different "decompression chain" from blueprint to product
artificial: high level 3D model -> triangle mesh or similar -> toolpaths -> low level actuator commands -> final arrangement of atoms (at room temperature)
--- you get almost the same product from the same blueprint every time
biological: DNA -> ribosomal protein production -> modifications through interactions with other proteins + a lot of usage of emergent behaviour -> final arrangement of atoms (in a shock-frozen* snapshot)
--- you get a noticeably different product from the same blueprint every time
Point 1.) may be doable by putting in lots and lots of additional effort beyond basic mechanosynthesis.
Further continuous improvement beyond basic mechanosynthetic capabilities will go in this direction.
Point 2.) should be doable too by forcefully stretching chain molecules and doing mechanosynthesis near the ends. Sufficient cooling and a molecular-sorting-pump-like vacuum lockout right after production will probably be necessary. So after attainment of basic diamondoid mechanosynthesis it shouldn't be too hard to extend it with capabilities to produce e.g. pure sugar and some other similarly simple substances.
Point 3.) is serious though:
Atomically precise 3D scanning of the product of a biological system [which seems ridiculously difficult because of point 2.) in reverse, where you can't choose what you find] and compressing it into a mechanistic nanofactory-style blueprint would at best produce something with strange compression artefacts (like in an over-compressed JPEG image - and while AI (in the sense of smart compression) is rather unrelated to basic APM capabilities, here it may help a bit). For a perfect 1:1 copy you'd need to store the location of every atom - which in its most compact representation basically IS the product. Making copies while taking apart a shock-frozen original (whatever you'd call that process - both "cloning" and "beaming" are very misleading) is imo not very sensible. I haven't thought about "divergent disassembly" for scanning as an analogue to "convergent assembly" yet - I'd guess the slicing process may slow things down severely.
In conclusion I wouldn't go as far as to say that it is completely and utterly impossible to make a perfect 1:1 copy of a steak with a diamondoid nanofactory (on steroids) but I'm pretty sure it is for all practical purposes too far off, and there are way more effective and way, way easier (but still harder than basic synthesis of diamond) ways to make something that
A.) on the macro-scale comes close enough to e.g. a steak that it fulfils its purpose (nourishing, healthy, tasty and nice-looking)
B.) on the sub-micro-scale is actually completely different. (think APT-based micro-scale ink-jet printer)
Does that reasoning make sense - do you spot any errors?
I'm collecting my thoughts about synthesis of food here:
ps: If nanofactories emerge from a long and twisted path through a series of "pseudo-biotechnology" steps there will be remainders of earlier technology steps of this pseudo-biological stuff (DNA origami & co) that are still producible. There may be motivation though to remove this bootstrapping history so that nanofactories can be used in more extreme environments.
I write down my thoughts about this here:
>> ... a belated welcome ...
Don't sweat it, I'm actually pleasantly surprised since I was expecting to wait at least half a month.
>> I liked the idea you used to make the size of an atom comprehensible.
Thanks, the idea may be good but I think the video needs improvement.
Btw (off-topic): this works for visualising the size of the earth too. Scale down a soccer field to hair size and an equally scaled-down model of the earth fits comfortably onto the soccer field. Beyond that (solar system, galaxy and beyond) gaining an intuitive feeling for absolute size relations to everyday objects is imo impossible.
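The arithmetic behind that analogy is quick to check; here is a minimal sketch with my own assumed round numbers (field ~105 m, hair ~0.1 mm, earth diameter ~12742 km - none of these figures are from the post itself):

```python
# Sanity check of the "earth on a soccer field" scaling analogy.
# All numbers below are assumed round values, not measured ones.
field_m = 105.0       # soccer field length, roughly 105 m
hair_m = 1e-4         # human hair width, roughly 0.1 mm
earth_m = 1.2742e7    # earth diameter, roughly 12742 km

scale = hair_m / field_m          # shrink factor that turns the field into a hair
earth_scaled = earth_m * scale    # the earth at that same scale

print(f"scale factor: {scale:.1e}")
print(f"scaled-down earth: {earth_scaled:.1f} m across")  # ~12 m, fits easily
```

So the shrunken earth ends up roughly 12 m across - comfortably inside a 105 m field.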
I really want to understand nanofactories and since convergent assembly is arguably one of the most important aspects of them I need to get a tight grasp on it. I already found out quite a bit and wrote it down here:
But there still are some major things that I do not understand - partly or in full.
(*)Note: In the following when I refer to "the main images of convergent assembly" I mean the four that can be found here:
In case you're not aware: In these examples area branching and volume ratio steps are matching such that:
[ equal throughput on all stages <=> equal operating speeds on all stages ]
This seems reasonable for a first approximation.
In simple math:
Q2 = 1 s^3 f
Q1 = 4 (s/2)^3 2f = 1 s^3 f
=> Q2 = Q1
s ... side-length
f ... frequency
Q2 ... throughput of the upper (bigger) layer (reference units)
Q1 ... throughput of the lower (smaller) layer
In main parameters (assuming constant speeds):
* area branching factor = 4
* volume upward ratio = 1/(4*(1/2)^3) = 2
* scale-step = 2
!! please ask for details if that doesn't make sense to you !!
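The throughput balance above can also be checked numerically. Here is a minimal sketch (my own helper function, not from any nanofactory literature) that reproduces the Q2 = Q1 result for side-length s, frequency f, area branching factor b and scale-step k:

```python
def throughputs(s, f, b, k):
    """Throughput capacity of an upper assembly layer (block side s,
    operating frequency f) and of the layer below it, which handles
    b blocks of side s/k at frequency k*f (constant-speed assumption)."""
    q_upper = s**3 * f
    q_lower = b * (s / k)**3 * (k * f)
    return q_upper, q_lower

# the matched case from the main images: area branching 4, scale-step 2
q2, q1 = throughputs(s=1.0, f=1.0, b=4, k=2)
print(q2, q1)  # both 1.0 - the balance holds exactly when b == k**2
```

The same function also confirms the bigger-step case discussed further below (b = 16, k = 4 gives equal throughputs again).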
General questions about C.A.:
Why do all the "main images of convergent assembly"(*) go all the way up with the convergent assembly steps to the size of the whole nanofactory? This changes the nanofactory from a convenient sheet format to a clunky box.
You can read here ...
... why I think that "higher convergent assembly levels quickly lose their logistic importance"
What I wrote there too (for now under the headline "Further motivations" at the bottom) are some things that came to my mind about why the convergent assembly nevertheless goes up to the top in the main images of convergent assembly(*). These are:
* simpler construction of overhangs without the need for scaffolds (stalactite-like structures)
* the automated management of bigger logical assembly-groups
* the simpler decomposition into big standard parts that can be put together again in completely different ways
* the possibility to keep everything in a vacuum till the final product release - this should not be necessary ***
(Can you think of any more?)
I don't deem any of them worthy enough though to sacrifice the nice sheet form factor that a nanofactory could have. It is clear that the bottom three convergent assembly steps (roughly: 1 mainly mechanosynthesis, 2 mainly radical surface zipping, 3 mainly shape locking) are absolutely necessary. But I'm not clear about the topmost convergent assembly stages -- they definitely do not increase the nanofactory's speed, that much is certain. (as reviewed above: cross sections at any height have the same throughput capacity)
*** Vacuum lockout is a special topic easily big enough to start a separate thread.
late vacuum lockout: perfectly controlled environment <- one of Drexler's main points
not so late vacuum lockout: enforce recyclable shape-locking micro-components (~1um?) so that we don't end up with APM being the greatest waste producer in human history (I question whether this will be avoidable). Consider this line from the Productive Nanosystems promo video: "the only waste product is clean water and warm air" ... and oops, we forgot about the product when it is no longer needed ... add too much silicon and you can't even burn it normally - you'd get flame-inhibiting slag. (edit: Well, "Radical Abundance" does mention recycling briefly but it treats it more like a magical black box.) Btw: I'm currently working on a simple and elegant vacuum lockout system for arbitrarily shaped micro-scale blocks -- but that's a separate topic ...
Why do all the "main images of convergent assembly"(*) use a low area branching factor of four (amounting to a side-length branching of two)? As the (in relation to nanofactories stupidly low-tech) current-day 3D printers nicely demonstrate, way bigger step-sizes can still lead to practical production times. Let me formulate it like this: who would build an (advanced) robot just to put 8 parts together?! Also, stuff usually does not come apart into very few equally sized parts.
Choosing a bigger step-size may be quite a bit slower than the absolute maximum possible (in case the bottom mill layers are pretty performant) but it also has two big advantages:
1) designers will probably have to think less about the production process
2) bigger steps mean fewer steps - which is way easier to grasp for the human mind
To elaborate on point two: Suppose we choose a step-size in side-length of 32 ~= sqrt(1000) ... (instead of the common two -- 32 is still way lower than what today's 3D printers do) ... then we get from 1nm (5 C atoms) to 1mm in only four steps where each step has a comprehensible size ratio.
like this: 1nm (1) 32nm (2) 1um (3) 32um (4) 1mm
(When designing in this setting it does not seem so far-fetched anymore to actually hit the limits and run out of space. You can actually realize for the first time that there is not infinite room at the bottom - so to say.)
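The four-step claim is easy to verify; here is a quick sketch of my own arithmetic:

```python
import math

step = 32                      # side-length ratio per assembly stage
start_m, end_m = 1e-9, 1e-3    # 1 nm up to 1 mm

# number of 32-fold steps needed to span six orders of magnitude
n_steps = math.log(end_m / start_m) / math.log(step)
print(f"steps needed: {n_steps:.2f}")  # just under 4, since 32**4 = 1048576 ~ 10**6

size = start_m
for stage in range(5):
    print(f"stage {stage}: {size:.3e} m")  # 1 nm, 32 nm, ~1 um, ~33 um, ~1 mm
    size *= step
```

(32^4 = 1,048,576, so four full steps overshoot 1 mm by about 5% - close enough for the round numbers in the list above.)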
Note that with bigger step-sizes the throughput balance stays perfectly intact:
In simple math:
Q2 = 1 s^3 f
Q1 = 16 (s/4)^3 4f = 1 s^3 f
=> Q2 = Q1
* area branching factor = 16
* volume upward ratio = 1/(16*(1/4)^3) = 4
* scale-step = 4
Here is my supposition:
The reason why 32-fold size steps are usually not depicted is probably that you can barely see three levels of convergent assembly on a computer screen then. But there's a way around this! There is a possibility to map the layers of a nanofactory such that one can see all the details on all scales equally well. I made an info-graphic on this quite a while ago but it turns out the straight horizontal lines are actually wrong.
Recently I found Joachim Böttger's work which I think is rather relevant for the visualisation of convergent assembly configurations in nanofactories:
I wrote a python program to do that kind of mapping. Here's an early result:
I may try to apply it to some screenshots of this video:
I also have further plans with this which would be too much for here though.
Questions regarding uncommon forms of C.A.:
There are two exceptions I know of which deviate from the "main images of convergent assembly"(*):
I'll describe how I understand them below. If you spot some misunderstandings please point me to them.
Nanosystems page 418 -- Figure 14.4.
* area branching factor = 8
* volume upward ratio = 1/(8*(1/2)^3) = 1
* scale-step = 2
Drexler himself writes (capitals by me):
"... This structure demonstrates that certain geometrical constraints can be met,
BUT DOES NOT REPRESENT A PROPOSED SYSTEM".
Here is how this looks like: http://www.zyvex.com/nanotech/images/DrexlerConverge.jpg
If I understand it right this is because in this arrangement the throughput capacity rises by a factor of two with every iteration downward, creating a veritably massive bottleneck at the top (30 iterations -> factor 2^30 ~ 10^9).
In simple math:
Q2 = [8s^3] f = 8 s^3 f
Q1 = 8[8(s/2)^3] 2f = 16 s^3 f
=> Q1 = 2*Q2 .... oops
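A tiny sketch of that mismatch (my own toy model of the b = 8, k = 2 configuration, not anything from Nanosystems itself):

```python
def capacity_ratio(b, k):
    """Throughput capacity of a lower stage relative to the stage directly
    above it: b sub-blocks of side 1/k, operated at k-fold frequency."""
    return b * (1 / k)**3 * k

# Figure 14.4 style arrangement: area branching 8, scale-step 2
r = capacity_ratio(8, 2)
print(r)       # 2.0 -> capacity doubles with every stage going downward
print(2**30)   # 1073741824 - the ~10**9 mismatch after 30 iterations
```

For comparison, the matched configuration (b = 4, k = 2) gives a ratio of exactly 1.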
The convergent assembly in Chris Phoenix's nanofactory design that he describes here:
I am not talking about the geometrical design decisions but about the main parameters of the chosen convergent assembly. In this regard it is completely identical to Drexler's (UNproposed) configuration in Nanosystems Figure 14.4., and that on ALL! stages, since it has:
lower (stratified) stages: (chosen geometry not arbitrarily scalable, as Chris Phoenix points out himself)
* area branching factor = 3x3 (-1 redundant normally unused) = 8
* volume downward ratio = (8*(1/8))/1 = 1
* scale-step = 2
upper 3D fractal stages:
* area branching factor = 8 (+1 redundant normally unused)
* volume downward ratio = (64*(1/8))/8 = 1
* scale-step = 2
Major error here ??
Unless I am misunderstanding something I spot a major error in reasoning here. The reasoning goes as follows: You want the throughput capacity of the very bottom stage upped to compensate for the slowness of this single stage of general purpose "fabricators".
BUT: deducing from this that continuing this approach further up the stages helps even more is incorrect. Doing so is actually detrimental. The reason: all further stages are super fast since they only have to assemble eight blocks into one 2x2x2 block, so this leads to the aforementioned upward-tightening funnel in throughput capacity. While the stage right after the fabricators is seriously overpowered (or equivalently under-challenged), at some point up the stack the load starts to fit the capacity and from there on out the funnel takes effect. In spite of this exponential funnel situation the top stage still looks totally practical - which is nothing but amazing and once again proves the insane potential of nanofactories.
What I think is actually necessary for a design where throughput is better matched to throughput capacity is much, much more parallelism in the bottom layer. When you stack those fabricators and thread the finished parts past them quickly, some similarity to mill-style assembly crops up - which may not be too surprising.
(Such stacking may be necessary in one or two more stages - due to e.g. slow surface radical zipping - but that should be it -- that is, as I understand it, the reason why the lowest three convergent-assembly layer stacks actually become thinner going upward - as can be seen in the Productive Nanosystems video)
Imo in Chris Phoenix's nanofactory text the separation of the abstract convergent assembly details from concrete geometric implementation details and other stuff could have been done better. I may go through the trouble of cropping out and citing the dispersed relevant pieces if requested.
What also bothers me is that although this is supposed to be a practical design it adheres rather closely to the very small side-length-doubling steps which I've tried to argue against above.
As I've seen, Chris Phoenix is a member of this forum.
If you happen to read this post I would be delighted to hear your thoughts about this.
Please check if you see any errors in my reasoning.
If not, how would you modify your design?
There are, I think, two main reasons to slightly deviate from the good first approximation of constant speed on all stages that I've spoken of above.
At the bottom:
* limit in spatial mechanosynthesis density - manipulators are necessarily bigger than atoms
* limit in temporal density - slow down to prevent excessive friction heat, since the big bearing surface area outweighs even the big superlubricity benefit
(these two are rather well known)
I have an idea which I call "infinitesimal bearings" - See here:
This should allow us to cheat on the constant-speed-at-all-sizes rule, especially in the mid-size range (0.1um .. 0.1mm).
Here's a maybe interesting observation:
To get a throughput capacity funnel that widens in the upward direction (which will certainly be needed but has seemingly never been discussed) one needs a low area branching factor and a high volume upward ratio.
What would be the optimal geometry for this?
(____) widening -- (layered) constant-- (3D-fractal) tightening
This somewhat reminds me of space topologies: elliptical, flat, hyperbolic ....
Note that any type can be forced into any geometric configuration for a few stages.
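The three regimes can be told apart from just the two main parameters; here is a minimal classifier sketch (my own naming, derived from the throughput formulas above):

```python
def funnel_type(b, k):
    """Classify a convergent assembly step by how its throughput capacity
    changes going downward. b: area branching factor, k: scale-step."""
    ratio = b / k**2   # capacity of the lower stage relative to the upper one
    if ratio > 1:
        return "tightening"   # capacity grows downward -> bottleneck at the top
    if ratio < 1:
        return "widening"     # capacity grows upward
    return "constant"         # balanced, constant-speed case

print(funnel_type(4, 2))   # constant   (the main images)
print(funnel_type(8, 2))   # tightening (Fig. 14.4 / the 3D-fractal stages)
print(funnel_type(2, 2))   # widening   (low branching, high volume upward ratio)
```

The b = 2, k = 2 example is just one arbitrary widening choice; the open question above is which geometry realizes this best.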
And finally to the biggest mystery of all I've encountered so far:
There's this discussion by J. Storrs Hall which I still need to chew through thoroughly.
It's about the scaling law for the replication time of proposed nanofactories.
It actually mismatches the scaling law for replication observed in nature by an order of magnitude(?).
I think this is super relevant!
Some open questions regarding this:
easy: How would this look in a stratified configuration?
easy: How much is it "unscalable" in this configuration?
hard: How can the essence of this be visualized in a less abstract, more intuitive way?
That is: Why does nature choose to do so?
ps: Please excuse the size of this post but I wanted to have all the convergent assembly stuff together to form a complete picture.
First of all many thanks for the new forum
I've quickly read through all the posts and have seen that there is interest in making youtube videos.
I've actually been planning to do so for quite a while now.
It started with a talk about APM that I gave last year.
( https://cfp.linuxwochen.at/de/LWW14/public/events/115 )
Since I had only about sixteen listeners I thought about turning the slides into a youtube presentation,
so that all the work wouldn't go to waste.
I started out with about 40 slides and improved on them.
Besides collecting relevant images I made many svg info-graphics myself.
A few of them can be seen here:
The number of slides grew and grew and I've now ended up with about 200 of them (all German atm) - still growing.
Sadly I realized just very recently that static slides are a catastrophe for youtube - way too boring.
See this test-video catastrophe: https://www.youtube.com/watch?v=-Y60-80X7q4
the same **** in german: https://www.youtube.com/watch?v=JoFHHtl7S38
(I’m well aware that there is much more wrong with these videos than just the single static image)
As a consequence I plan to switch my focus to making screen-cast videos where I draw stuff and drag and scale images (probably accelerated video with pre-recorded audio) - this way there is more movement on the screen and the viewer always knows where to put her/his attention. (Making animations would be way too much effort.)
With the slides I ended up with five big main parts which are:
* the basics of working in the small
* a bottom-up tour through a nano-factory (as a sensible far-term goal - not as an easy-to-reach goal)
* the products of a nano-factory (with focus on solving the great civilisation problems)
* the path to the nano-factory (current relevant developments)
* some possible ecological and economical consequences and miscellaneous
I recently formulated some brand new text for the overall introduction video.
As a side-note: I want the introduction to be so easy that anyone’s grandparents can understand most of it.
Here it is:
(excuse spelling errors - I did a quick & dirty translation just now)
(I'd be pleased to hear your thoughts about this)
Here I want to introduce you to a technology that has greater potential to enrich our world than all the achievements of mankind up until the present day.
Specifically this is about a device that can produce all the things that you need in your daily life. And that extremely cheaply or even completely for free. This device is so small that it comfortably fits on a table and so quiet and odourless that you can run it in your living room.
All the often gaily coloured or super stylish items that come out of this nano-factory consist of very special materials. Although they are made of the tiniest gemstone pieces they can behave, for example, like rubber. This is however only one concrete example. On the whole there are gazillions of new material properties possible which from today's view appear either utterly uncommon or alien. There are limits though. Biological products like real beef cannot be produced. For this a very different technology is necessary.
Your personal nano-factory of course needs building material. It can even filter this from completely normal air.
To use air as building material your nano-factory needs a lot of energy though. Here this energy comes from a solar-cell foil. This foil in turn is also made by your nano-factory. With that the circle closes. Instead of air you can also run your nano-factory on other easily attainable substances. In this case there is often more energy contained in the building material than you need to run the nano-factory. The nano-factory then works like a generator and can feed the excess energy back into the grid or pump it into very special energy storages. I believe you can now correctly guess how you get those very special energy storages.
Attention: This is not about what the media today calls "nanotechnology" and also not about swarms of self-reproducing nano-robots of the kind you can read about in some science fiction literature. Instead this is about factual, existing, up-to-date knowledge about those nano-factories.
Even if we can't yet build such a nano-factory, this doesn't rule out that we can understand major properties of it.
To find trustworthy statements about a future nano-factory without having the possibility to make direct tests or measurements on it, we must obey strict discipline. First, we are only allowed to use well-tested theoretical models, and second, in all the estimates we do with these models we need to be very careful. In other words: we always need to leave ourselves big safety margins. If we - under strict abidance of these rules - analyse a rough model of a nano-factory, we see something astonishing. In spite of the consistently pessimistic estimates we get enormously promising values both for the performance of a nano-factory and the performance of its products.
[topic & target audience]
In this series I want to present, for the first time, the already existing knowledge about nano-factories in a well-illustrated way that is accessible not only to scientists but to the average technologically interested person.
[benefit for the audience & call for action] ~~improvable~~
In this compact introduction I have barely scratched the surface of the topic "nano-factory". If you decide to accompany me and dive deeper into the depths of this technology, you can expect an orientation aid in case I could motivate you to help with the building of the first nano-factory. And on the other side you can look forward to an extremely seldom-shown image of the future which is not based on the usual suspects, which would be: >>first<< far-from-reality science fiction, >>second<< advertisement for short-sighted profit-oriented research and development and >>third<< reports of all the seemingly ineluctable future catastrophes in the public media. In other words you can look forward to a picture of the future which markedly deviates from the traditionally rehashed forecasts.
If a nano-factory now sounds too fantastic to you, I recommend you start with the "path to the first nano-factory".
If you are impatient and want to know more about the new possibilities which open up with such nano-factories, I propose you start with the "products of a nano-factory". If you're interested in the inner processes of a nano-factory, then start with the "tour through a nano-factory". And if you want to take your time and hear the whole thing from the beginning, start with the "basics". At the end I keep open a slot for speculations about environmental and economic consequences plus further mixed topics.
[call-for-action & thanks & dismissal]
This video series is a work in progress. Please be patient. If I could spark your interest please subscribe to my youtube channel. I'm always happy about constructive questions and comments. I should also probably point out that the majority of what I'm going to present here is not my own work. Thus I'm going to specify the sources used to the best of my knowledge and belief. If you managed to endure to this point, I thank you for your attention.
Btw: I'm not really happy with the term "nano-factory" but that's for another topic.