Hi,
I really want to understand nanofactories, and since convergent assembly is arguably one of their most important aspects I need to get a tight grasp on it. I already found out quite a bit and wrote it down here:
http://apm.bplaced.net/w/index.php?title=Convergent_assembly
But there still are some major things that I do not understand - partly or in full.
(*) Note: In the following, when I refer to "the main images of convergent assembly" I mean the four that can be found here:
1) http://e-drexler.com/p/04/05/0609factoryImages.html
2,3,4) http://www.zyvex.com/nanotech/convergent.html
In case you're not aware: in these examples the area branching factor and the volume ratio per step are matched such that:
[ equal throughput on all stages <=> equal operating speeds on all stages ]
This seems reasonable for a first approximation.
In simple math:
Q2 = 1 s^3 f
Q1 = 4 (s/2)^3 2f = 1 s^3 f
=> Q2 = Q1 :)
s ... side-length
f ... frequency
Q2 ... throughput of the upper (bigger) stage, in reference units
Q1 ... throughput of the lower (smaller) stage
Main parameters (assuming constant speeds):
* area branching factor = 4
* volume upward ratio = 1/(4*(1/2)^3) = 2
* scale-step = 2
!! please ask for details if that doesn't make sense to you !!
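For anyone who wants to play with the numbers, here is a minimal Python sketch of the balance check (the function name and the constant-speed assumption - frequency scaling inversely with size - are my choices for illustration, not something from the images):

# Throughput balance check - a minimal sketch under the constant-speed
# assumption (half-size chambers cycle at double frequency).
def stage_throughput(n_chambers, side_length, frequency):
    # volume of product passed per unit time
    return n_chambers * side_length**3 * frequency

s, f = 1.0, 1.0                          # reference side-length and frequency
Q2 = stage_throughput(1, s, f)           # one big upper chamber
Q1 = stage_throughput(4, s / 2, 2 * f)   # 4 half-size chambers at double frequency
print(Q1 == Q2)                          # True -> throughput balanced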
General questions about C.A.:
Question a)
Why do all the "main images of convergent assembly"(*) carry the convergent assembly steps all the way up to the size of the whole nanofactory? This changes the nanofactory from a convenient sheet format into a clunky box.
You can read here ...
http://apm.bplaced.net/w/index.php?title=Convergent_assembly
... why I think that "higher convergent assembly levels quickly lose their logistic importance"
I also wrote down there (for now under the headline "Further motivations" at the bottom) some reasons that came to my mind for why convergent assembly nevertheless goes all the way to the top in the main images(*). These are:
* simpler construction of overhangs (stalactite-like structures) without the need for scaffolds
* the automated management of bigger logical assembly-groups
* the simpler decomposition into big standard parts that can be put together again in completely different ways
* the possibility of keeping everything in vacuum until the final product release - this should not be necessary ***
(Can you think of any more?)
I don't deem any of them weighty enough, though, to sacrifice the nice sheet form factor that a nanofactory could have. It is clear that the bottom three convergent assembly steps (roughly: 1 mainly mechanosynthesis, 2 mainly radical surface zipping, 3 mainly shape locking) are absolutely necessary. But I'm not clear about the topmost convergent assembly stages -- they definitely do not increase the nanofactory's speed, that much is certain (as reviewed above: cross sections at any height have the same throughput capacity).
*** Vacuum lockout is a special topic, easily big enough to deserve a separate thread.
late vacuum lockout: perfectly controlled environment <- one of Drexler's main points
not so late vacuum lockout: enforce recyclable shape-locking micro-components (~1um?) so that we don't end up with APM being the greatest waste producer in human history (I question whether this will be avoidable). Consider this line from the Productive Nanosystems promo video: "the only waste product is clean water and warm air" ... and oops, we forgot about the product when it is no longer needed. Add too much silicon and you can't even burn it normally - you'd get flame-inhibiting slag. (Edit: Well, "Radical Abundance" does mention recycling briefly, but it treats it more like a magical black box.) Btw: I'm currently working on a simple and elegant vacuum lockout system for arbitrarily shaped micro-scale blocks -- but that's a separate topic ...
Question b)
Why do all the "main images of convergent assembly"(*) use a low area branching factor of four (amounting to a side-length branching factor of two)? As current-day 3D printers (stupidly low-tech compared to nanofactories) nicely demonstrate, way bigger step-sizes can still lead to practical production times. Let me formulate it like this: who would build an (advanced) robot just to put 8 parts together?! Also, stuff usually does not decompose into very few equally sized parts.
Choosing a bigger step-size may be quite a bit slower than the absolute maximum possible (in case the bottom mill layers are pretty performant) but it also has two big advantages:
1) designers will probably have to think less about the production process
2) bigger steps mean fewer steps - which is way easier for the human mind to grasp
To elaborate on point two: suppose we choose a step-size in side-length of 32 ~= sqrt(1000) ... (instead of the common two -- 32 is still way lower than what today's 3D printers do) ... then we get from 1nm (about 5 carbon atoms) to 1mm in only four steps, where each step has a comprehensible size ratio.
like this: 1nm (1) 32nm (2) 1um (3) 32um (4) 1mm
(When designing in this setting it no longer seems so far-fetched to actually hit the limits and run out of space. For the first time you can really feel that there is not infinite room at the bottom, so to speak.)
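A tiny sketch that generates this ladder (values as in the example above; note 32^2 = 1024, so the upper rungs land near, but not exactly on, 1um / 32um / 1mm):

# Stage side-lengths for 32-fold scale-steps, starting at 1 nm.
step = 32
size_nm = 1.0
for stage in range(5):
    print(f"stage {stage}: {size_nm:g} nm")   # 1, 32, 1024 (~1um), ~32.8um, ~1.05mm
    size_nm *= step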
Note that with bigger step-sizes the throughput balance stays perfectly intact:
In simple math:
Q2 = 1 s^3 f
Q1 = 16 (s/4)^3 4f = 1 s^3 f
=> Q2 = Q1 :)
Main parameters:
* area branching factor = 16
* volume upward ratio = 1/(16*(1/4)^3) = 4 (see the check below)
* scale-step = 4
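A quick check of these parameters and of the general balance condition (the formulas Q_lower/Q_upper = B/k^2 and volume upward ratio = k^3/B are my own derivation from the definitions above, under the constant-speed assumption):

# B = area branching factor, k = scale-step.
for B, k in [(4, 2), (16, 4)]:
    volume_upward_ratio = 1 / (B * (1 / k)**3)   # = k^3 / B
    capacity_ratio = B / k**2                    # Q_lower / Q_upper
    print(B, k, volume_upward_ratio, capacity_ratio)
# -> 4 2 2.0 1.0   and   16 4 4.0 1.0   (both perfectly balanced)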
Here is my supposition:
The reason why 32-fold size steps are usually not depicted is probably that you can then barely see three levels of convergent assembly on a computer screen. But there's a way around this! The layers of a nanofactory can be mapped such that one can see all the details on all scales equally well. I made an info-graphic on this quite a while ago, but it turns out the straight horizontal lines in it are actually wrong.
see here:
https://www.flickr.com/photos/65091269@N08/21136191800/in/dateposted-public/

Recently I found Joachim Böttger's work, which I think is rather relevant for the visualisation of convergent assembly configurations in nanofactories:
http://www.uni-konstanz.de/grk1042/people/member/boettger.html
http://www.amazon.de/Complex-Logarithmic-Views-Warping-Joachim-B%C3%B6ttger/dp/3843901805
http://graphics.uni-konstanz.de/publikationen/2008/satellite/Boettger%20et%20al.%20--%20Detail-In-Context%20Visualization%20for%20Satellite%20Imagery.pdf
http://graphics.uni-konstanz.de/publikationen/2006/complex_logarithmic_views/Boettger%20et%20al.%20--%20Complex%20Logarithmic%20Views%20for%20Small%20Details%20in%20Large%20Contexts.pdf
I wrote a Python program that does this kind of mapping. Here's an early result:
https://www.flickr.com/photos/65091269@N08/20702935363/in/dateposted-public/
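My actual program is longer, but here is a minimal sketch of the core idea as I understand it from the Böttger et al. papers - a log-polar / complex-logarithmic warp done by inverse mapping (all names, the center-focus choice, and nearest-neighbour sampling are simplifications of mine):

# Assumes a source image as a numpy array (h, w) or (h, w, 3).
# Every output pixel (row, col) is read as w = log_r + i*theta and
# sampled from the source at z = exp(w) around the image center, so
# equal-height bands of output rows cover constant size RATIOS of input.
import numpy as np

def complex_log_warp(src, out_h=400, out_w=400):
    h, w = src.shape[:2]
    cx, cy = w / 2.0, h / 2.0              # focus point: image center
    r_max = min(cx, cy)                    # largest radius fully inside the image
    log_r = np.linspace(0.0, np.log(r_max), out_h)   # log(1 px) .. log(r_max)
    theta = np.linspace(0.0, 2 * np.pi, out_w, endpoint=False)
    L, T = np.meshgrid(log_r, theta, indexing="ij")
    z = np.exp(L + 1j * T)                 # inverse mapping z = exp(w)
    xs = np.clip((cx + z.real).astype(int), 0, w - 1)
    ys = np.clip((cy + z.imag).astype(int), 0, h - 1)
    return src[ys, xs]

Applied to a screenshot of a zoom sequence, this is exactly what makes all convergent assembly levels equally visible: each doubling (or 32-folding) of scale occupies the same number of output rows.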

I may try to apply it to some screenshots of this video:
http://www.dailymotion.com/video/x4mv4t_zoom-into-hair_tech
I also have further plans for this, but they would be too much for here.
Questions regarding uncommon forms of C.A.:
There are two exceptions I know of which deviate from the "main images of convergent assembly"(*):
I'll describe how I understand them below. If you spot some misunderstandings please point me to them.
exception a)
Nanosystems page 418 -- Figure 14.4.
Main parameters:
* area branching factor = 8
* volume upward ratio = 1/(8*(1/8)) = 1
* scale-step = 2
Drexler himself writes (capitals by me):
"... This structure demonstrates that certain geometrical constraints can be met,
BUT DOES NOT REPRESENT A PROPOSED SYSTEM".
Here is what this looks like: http://www.zyvex.com/nanotech/images/DrexlerConverge.jpg

If I understand it right, this is because in this arrangement the throughput capacity rises by a factor of two with every iteration downward, creating a truly massive bottleneck at the top (30 iterations -> factor 2^30 ~ 10^9).
In simple math:
Q2 = [8s^3] f = 8 s^3 f
Q1 = 8[8(s/2)^3] 2f = 16 s^3 f
=> Q1 = 2*Q2 .... oops >.<
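A one-liner to see the cumulative effect (taking ~30 stages as the rough nm-to-m span of Figure 14.4; the exact stage count is my reading, not a figure from the book):

# Each stage downward doubles capacity, so over 30 stages the
# bottom-to-top capacity mismatch accumulates to:
print(f"{2**30:.2e}")   # ~1.07e+09, i.e. roughly a factor of 10^9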
exception b)
The convergent assembly in Chris Phoenix's nanofactory design that he describes here:
http://www.jetpress.org/volume13/ProdModBig.jpg
I am not talking about the geometrical design decisions but about the main parameters of the chosen convergent assembly. In this regard it is completely identical to Drexler's (UNproposed) configuration in Nanosystems Figure 14.4, and that on ALL stages, since it has:
lower (stratified) stages: (the chosen geometry is not arbitrarily scalable, as Chris Phoenix points out himself)
* area branching factor = 3x3 (-1 redundant, normally unused) = 8
* volume downward ratio = (8*(1/8))/1 = 1
* scale-step = 2
upper 3D fractal stages:
* area branching factor = 8 (+1 redundant, normally unused)
* volume downward ratio = (64*(1/8))/8 = 1
* scale-step = 2
Major error here ??
Unless I am misunderstanding something, I spot a major error in reasoning here. The reasoning goes as follows: you want the throughput capacity of the very bottom stage increased to compensate for the slowness of this single stage of general-purpose "fabricators".
BUT: deducing from this that continuing this approach further up the stages helps even more is incorrect - doing so is actually detrimental. The reason: all further stages are super fast, since they only have to assemble eight blocks into one 2x2x2 block, and this leads to the aforementioned upward-tightening funnel in throughput capacity. The stage right after the fabricators is seriously overpowered, or equivalently under-challenged; at some point up the stack the load starts to fit the capacity, and from there on out the funnel takes effect. In spite of this exponential funnel situation the top stage still looks totally practical - which is nothing but amazing and once again proves the insane potential of nanofactories.
What I think is actually necessary for a design where throughput is better matched to throughput capacity is much, much more parallelism in the bottom layer. When you stack those fabricators and quickly thread the finished parts past them, some similarity to mill-style assembly crops up - which may not be too surprising.
(Such stacking may be necessary in one or two more stages - due to e.g. slow surface radical zipping - but that should be it. As I understand it, that's the reason why the lowest three convergent-assembly layer-stacks actually become thinner going upward, as can be seen in the Productive Nanosystems video. See the toy model below.)
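A toy model of the "under-challenged, then funneling" picture (the stage count and the absolute load are arbitrary choices of mine; the factor 2 per stage follows from the parameters above):

# Actual throughput ("load") is the same through every stage, while
# capacity halves with every stage upward (Q1 = 2*Q2 per step).
n_stages = 10
load = 1.0
capacity_bottom = 2.0**n_stages     # chosen so the TOP stage is exactly saturated
for i in range(n_stages + 1):
    utilization = load / (capacity_bottom / 2**i)
    print(f"stage {i:2d}: utilization = {utilization:.4f}")
# stage 0 (right above the fabricators) runs at ~0.1% of its capacity,
# the top stage at 100% - the funnel in numbers.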
Imo, Chris Phoenix's nanofactory text could have done a better job of separating the abstract convergent assembly details from the concrete geometric implementation details and other material. I may take the trouble to crop out and cite the dispersed relevant pieces if requested.
What also bothers me is that, although this is supposed to be a practical design, it adheres rather closely to the very small side-length doubling steps which I've tried to argue against above.
As I've seen, Chris Phoenix is a member of this forum.
@Chris Phoenix:
If you happen to read this post I would be delighted to hear your thoughts about this.
Please check if you see any errors in my reasoning.
If you don't, how would you modify your design?
Fine-tuning:
There are, I think, two main reasons to deviate slightly from the good first approximation of constant speed on all stages that I've spoken of above.
Reason a)
At the bottom:
* limit in spatial mechanosynthesis density - manipulators are necessarily bigger than atoms
* limit in temporal density - slowing down to prevent excessive friction heat, since the large total bearing surface area outweighs even the big superlubricity benefit
(these two are rather well known)
Reason b)
I have an idea which I call "infinitesimal bearings" - See here:
http://apm.bplaced.net/w/index.php?title=Infinitesimal_bearing
This should allow us to cheat on the constant-speed-at-all-sizes rule, especially in the mid-size range (0.1um .. 0.1mm).
Here's a possibly interesting observation:
To get a throughput capacity funnel that is widening in the upward direction (which will certainly be needed but has seemingly never been discussed) one needs a low area branching factor and a high volume upward ratio.
What would be the optimal geometry for this?
(____) widening -- (layered) constant -- (3D-fractal) tightening
This somewhat reminds me of spatial geometries: elliptic, flat, hyperbolic ....
Note that any type can be forced into any geometric configuration for a few stages.
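The classification can be put into a small formula (derived from the constant-speed relation Q_lower/Q_upper = B/k^2 used above; the function name is mine):

# Classify one convergent assembly step by area branching factor B
# and scale-step k, under the constant-speed assumption.
def funnel_type(B, k):
    ratio = B / k**2               # Q_lower / Q_upper
    if ratio < 1:
        return "widening upward"
    if ratio == 1:
        return "constant (layered)"
    return "tightening upward (3D-fractal)"

print(funnel_type(4, 2))   # constant   - the main images
print(funnel_type(8, 2))   # tightening - Nanosystems Fig. 14.4
print(funnel_type(2, 2))   # widening   - low branching, high volume upward ratio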
And finally, on to the biggest mystery I've encountered so far:
There's this discussion by J. Storrs Hall which I still need to chew through thoroughly.
It's about the scaling law for the replication time of proposed nanofactories.
It actually mismatches the scaling law for replication observed in nature by an order of magnitude(?).
See here:
http://www.imm.org/publications/reports/rep041/
I think this is super relevant!
Some open questions regarding this:
easy: How would this look in a stratified configuration?
easy: How much is it "unscalable" in this configuration?
hard: How can the essence of this be visualized in a less abstract, more intuitive way?
That is: Why does nature choose to do so?
....
---------------------------------------
ps: Please excuse the size of this post, but I wanted all the convergent assembly material in one place to form a complete picture.