David,
Sorry, I was traveling the last three days.
At 04:50 19/10/2013, David Conrad wrote:
On Oct 19, 2013, at 2:00 AM, JFC Morfin <[log in to unmask]> wrote:
> 1. when we designed it,
"We"?
Those who chose the labels of the INTLFILE (now the “root”): Bob Trehin, Joe Rinde, Mike Rude, Cees De Witt, Jim McDonald, Neil Sullivan, and me (Oct. 1978 to end 1980), plus Vida Stafford, who maintained the validations for the international network operations. Vint Cerf added others in the ICANN/NTIA root from 2000 on.
> the root was not to be managed that way.
When Postel and Mockapetris used it to specify and design the DNS, they knew that it was a heterarchy inherited from the real world (the then-existing data networks and monopolies) and not a hierarchy like their initial local NIC.
Actually, exactly the opposite. From the introduction of RFC 819:
"The intent is that the Internet names be used to form a tree-structured administrative dependent, rather than a strictly topology dependent, hierarchy."
You are referring to the DNS and the Internet root-file, not to the dot-root architectonics (i.e. the “.”). The dot-root is a reality, while the root-files are a concept. This is thirty years of confusion with thirty-five years of simplicity. Jon Postel's description of our 1984 consensus is in RFC 921. It is incomplete but correct. However, the ARPA 1972 architectonic model bug (cf. at the end) eventually led, from 1984 on, to the current misunderstanding in spite of the internet's open DNS design. T. Berners-Lee's RFC 3986 (on URIs) has unwittingly helped the confusion, as people keep conflating DNS and URIs.
Until that time, Internet names were indeed dependent upon "the then existing data networks" ("strictly topology dependent" hierarchy). What became the DNS broke from that approach to create a "tree-structured administrative dependent" hierarchy with a "universal reference point" (described in section 3 of RFC 819).
> This is why they designed the DNS to support 35,635 roots.
Err, what? The DNS "supports" an infinite number of roots (and, in fact, anyone who implements "split DNS" implements their own root); however, people have found that having a single consistent namespace is the most useful for interoperability across administrative domains. No idea where you came up with 35,635.
A split DNS is not the DNS.
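For readers who have not run one: the point Conrad makes is that the root is a per-resolver configuration choice, not something fixed by the protocol. A split DNS simply answers from a different namespace depending on who is asking. A minimal sketch of that idea (the addresses and network ranges here are illustrative, not any real deployment):

```python
# Sketch of the "split DNS" idea: the resolver picks a root (hence a
# namespace) per client, rather than there being one root fixed by the
# protocol. Addresses and ranges below are purely illustrative.
from ipaddress import ip_address, ip_network

ROOTS = {
    "internal": ["10.53.0.1"],      # a hypothetical private root server
    "public":   ["198.41.0.4"],     # a.root-servers.net (ICANN root)
}

INTERNAL_NETS = [ip_network("10.0.0.0/8")]

def root_for(client: str) -> list:
    """Return the root-server set this client's queries start from."""
    addr = ip_address(client)
    if any(addr in net for net in INTERNAL_NETS):
        return ROOTS["internal"]
    return ROOTS["public"]

print(root_for("10.1.2.3"))   # internal clients see the private root
print(root_for("8.8.8.8"))    # everyone else sees the ICANN root
```

BIND does the same thing with "views"; the sketch only shows the selection logic, which is the whole architectural point: the "." a resolver starts from is chosen, not given.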
One of the typical architectonic problems of the statUS-quo followers is to confuse what they have told one another for 40 years (here: the "IN", "ICANN/NTIA-only", class) with what they wrote (RFCs) and developed (BIND, etc.). At the end of the day, you genuinely believe that the former is reality and the latter a bad dream.
Today, the architectonic (network architecture, use, governance, economics, etc.) situation seems clear and can be described (though not easily solved). The main possibilities seem to be:
1. The technological status-quo is maintained:
- as the current statUS-quo: Montevideo Statement; Keith Alexander takes the R&D market lead through the military budget; CS lip service has little influence.
- as a multilateral status-quo (ITU, Dubai signatories): two main economic influence zones; no one happy with it; degradation risks; CS's occasional voice has no influence.
- as a conventional status-quo: through enhanced-cooperation specialized open conventions; CS may act as an institutional moderator. NB: multistakeholderism is then multilaterality extended to multilateral organizations and private operators. The WSIS has extended it to the Civil Society, but the latter has so far failed to contribute on behalf of use and users.
2. A new technological environment emerges. Up to now this was perforce a theoretical hypothesis, due to a lack of public awareness of the true situation, and to an increasing risk of a "Mafiosi" take-over or network balkanization, the technical development being very limited (though beyond my own available time). Snowden has already helped a lot, but we still blame those who protect us rather than our own vulnerability and stupidity (as a people we have never depended more on technology while being less interested in how it works and in its limitations and risks). The required “OpenAccess” response to the statUS-quo “OpenStand” could only emerge from CS.
This means that the Internet is a bomb in our common hands, as Eugene Kaspersky warns us (cf. Ronald Deibert’s paper).
1. First possibility: you guys (US stakeholders) are confident that it is inert because you turned it off. However, you are the first to call for an evolution of its mechanisms, in truly substantial ways.
2. Second possibility: we know, but you do not, that the switch is obsolete, so we would like you to stay still; yet everyone wants to keep it still in such a way that, if it explodes, it does not harm them too much. In the process, everyone more or less rocks the system.
3. Third possibility: we give the system a full restoration. However, there is a risk during the transition from the big-data statUS-quo to an RFC-enabled pig-data (protected/polluted by information garbage-data) internet.
The debate we had with Vint Cerf, John Klensin, etc., which led to Paul Hoffman and Pete Resnick's RFC 5895 on IDNA, showed that restoring natural catarchy (mutual auto-catalysis, the way nature works) instead of a (US or any multistakeholder) constrained hierarchy is not only possible, but is:
- the inner internet architectonic nature (what might seem normal in a datagram-based environment);
- the least cost/code development path.
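The RFC 5895 point is concrete: it moved the mapping of user input (case folding, normalization) out of the protocol and into local, per-application policy, each endpoint mapping for itself before encoding. A minimal sketch of that map-then-encode pipeline (Python's built-in "idna" codec implements the older IDNA2003, so this only illustrates the shape of the pipeline, not RFC 5895's exact mapping):

```python
# RFC 5895's architecture in miniature: the *local* endpoint maps
# user-supplied text (case folding, Unicode normalization) before the
# label is handed to the IDNA encoder. The mapping below is a sketch,
# not the normative RFC 5895 procedure.
import unicodedata

def local_map(label: str) -> str:
    """A local pre-IDNA mapping: case-fold, then NFC-normalize."""
    return unicodedata.normalize("NFC", label.casefold())

mapped = local_map("Bücher")
print(mapped)                  # bücher
print(mapped.encode("idna"))   # Punycode via Python's (IDNA2003) codec
```

The architectural point survives the version difference: no central authority performs the mapping; each endpoint does, which is exactly the "catarchy rather than constrained hierarchy" outcome described above.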
The adapted vision of distributed security that Ronald Deibert calls for would probably emerge from what could be called "tensecugrity" (a horrendous portmanteau word; could you think of something better? :-)), following Buckminster Fuller's tensegrity concept. What tensegrity computes in terms of constraints would be computed in terms of dynamic security in tensecugrity. It could possibly be implemented via capabilities. This seems somewhat equivalent to Shannon's use of the concept of entropy in his information/communication theory. (NB for those in intersem [semiotic internet] R&D: maybe this could be directly extended to intellition for secure agoric diktyologies syllodata.)
Now, everyone knows that the bomb's switch is obsolete, that we should not rock it too much, and that at any time some investor, net-open-code team, or country has the capacity to redirect our future. This should change "everyone"'s priorities. IMHO, these priorities lead to finding an architectonic way (using the currently deployed infrastructure, protocols, and operations as much as possible):
1. to correct the 1972 bug (no presentation layer).
2. to clarify the 1984+ misunderstanding (centralization of the root and IANA).
3. to use Dr. Lessig's adage and the RFC 1958 architectural rules to neutralize the risks born from the transition (split between transport consolidation and use development),
4. before the whole thing collapses.
I think it is doable. But some people will have to speak the truth. Otherwise, I am afraid, it will be achieved through self-organized criticality on a global scale.
jfc