Evan,

the question is simple. From your FLOSS experience, do you think that one can gather and motivate a FLOSS team to discuss, develop, and test the transition towards the Internet+? And from there infer the Internet+ (inter)Governance (multi-)consensus?

The Internet+ (of which the first test is Google+) simply means *reading* the IETF RFCs and using the capacities of the existing binaries within the convergence of the IETF layers with the other technologies and with the possibilities of the whole digital ecosystem. As an example, the Internet name space is only a part of the whole digital naming carried out by all the technologies, networks, services, OSes, applications and people. Giving ICANN control of a tiny part of it (the "ICANN/NTIA" CLASS "IN") is no problem; giving it control of the whole is the problem.
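
To make the CLASS point concrete, here is a minimal sketch (Python standard library only, field layout per RFC 1035) showing that every DNS question carries an explicit 16-bit QCLASS field; the "ICANN/NTIA" Internet class IN is just value 1, beside Chaos (3) and Hesiod (4). The server choice and the version.bind query are my own illustration, not anything from a draft:

    import socket
    import struct

    QCLASS = {"IN": 1, "CH": 3, "HS": 4}   # Internet, Chaos, Hesiod classes

    def make_query(name, qtype=16, qclass=QCLASS["IN"]):
        """Build a bare DNS query packet; qtype 16 = TXT."""
        header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
        question = b"".join(
            bytes([len(label)]) + label.encode("ascii")
            for label in name.split(".")
        ) + b"\x00"
        return header + question + struct.pack(">HH", qtype, qclass)

    # The classic non-IN query: version.bind, TXT, in the Chaos class.
    query = make_query("version.bind", qtype=16, qclass=QCLASS["CH"])
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2)
    sock.sendto(query, ("8.8.8.8", 53))    # Google Public DNS
    try:
        print(sock.recvfrom(512)[0][:12].hex())   # raw response header
    except socket.timeout:
        print("no answer")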

Conceptually, for civil society activists, the Internet+ considers (as per the WSIS) person-centric networking as the natural evolution of the Host/User centric networking (the current technology) and of the Content centric networking (one of the current R&D areas for new research and commercial TLDs and the naming economy). Technically, it is supported by an Intelligent Use (IUse), computer-assisted attitude in the way the unchanged Internet system is utilized. Architecturally, it is the application of the principle of subsidiarity to diversity, through the encapsulation of the user within a smart environment that the Internet understands, as per RFC 1958, as located at its own fringe, and that is to be documented as its IUI (Intelligent Use Interface) with its users. This is nothing more than an intelligent middleware layer between the user and the unchanged Internet.
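
A purely hypothetical sketch of that fringe idea, to fix ideas: an IUI middleware that sits between the person and the unchanged Internet, answering from the user's own smart environment first and falling back to plain DNS resolution last. The class and method names here are my own assumptions, not anything specified in the drafts:

    import socket

    class IntelligentUseInterface:
        """Hypothetical fringe middleware between a person and the Internet."""

        def __init__(self):
            self.personal_names = {}        # the user's own naming layer

        def register(self, alias, target):
            self.personal_names[alias] = target

        def resolve(self, name):
            # Subsidiarity: the user's smart environment answers first,
            # and the unchanged Internet (DNS, CLASS "IN") answers last.
            target = self.personal_names.get(name, name)
            return socket.gethostbyname(target)

    iui = IntelligentUseInterface()
    iui.register("my-bank", "example.com")
    print(iui.resolve("my-bank"))           # mapped at the fringe, then DNS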

The consensual validation of this architectural approach was obtained at the WG/IDNAbis, chaired by Vint Cerf and led by John Klensin and Patrik Fältström, where we (*) opposed the network-centric choices until Paul Hoffman and Pete Resnick analysed the IDNA2008 proposal from a user's point of view and proposed RFC 5895. In doing so, they made it possible to balance the reinforcement of the Internet DNS (which brings network stability) while acknowledging the empowerment of the end-user that we demanded, to adequately support the particulars of each language+script (orthotypography, i.e. a language's scripting syntax). This stabilized both the host and the network side (this is why Vint now provides a rock-solid basis for and through Google, with Public DNS (a back-up for a possible ICANN mess), and experiments with the Internet+ on the host side with Google+ -- however with a Google Risk that is most probably worse than the ICANN Risk).
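
The shift RFC 5895 made is easy to show in code: the mapping happens in the *user's* software, before IDNA2008 ever sees the label. Here is a simplified Python sketch of its three mapping steps (standard library only; the wide/narrow decomposition step is approximated with a targeted NFKC pass, and a real client would then hand the result to an IDNA2008 encoder):

    import unicodedata

    def local_map(label):
        """Approximate RFC 5895 user-side pre-processing of one label."""
        label = label.lower()                        # step 1: toLowerCase()
        label = "".join(                             # step 2: <wide>/<narrow>
            unicodedata.normalize("NFKC", ch)
            if unicodedata.decomposition(ch).startswith(("<wide>", "<narrow>"))
            else ch
            for ch in label
        )
        return unicodedata.normalize("NFC", label)   # step 3: NFC

    print(local_map("Café"))    # -> 'café', ready for IDNA2008 encoding
    print(local_map("ＡＢＣ"))  # fullwidth letters -> 'abc'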

(*) The "we" I refer to is known as the JEDIs, or Jefsey's followers, as dubbed at the WG/IDNAbis. Our main blocking point was the support of the French majuscules (usually indicated by upper-case letters) as an example of something that is impossible in IDNA2008 but is semantically necessary. We support the users' right to a polycratic and developing Internet (i.e. a balanced Internet on a processor-equal basis). We steer towards a computer-assisted interbrain facilitation of complexity, as we observe many efforts in that direction in areas such as language support, semantic R&D, cerebric research, multilinguistics (the cybernetics of linguistic diversity), agoric multi-agent approaches, etc.

The first problem with this IUse area was to know where it would be advisable to get it documented, as it spans the limits of the IETF area and of other SDOs. The solution that came out on the IETF side is the IUCG (Internet/IETF users contributing group), with the non-WG permanent [log in to unmask] mailing list to discuss this evolution and liaise with the IUTF (Intelligent Use Task Force), currently in preparation.

As the facilitator of this IETF mailing list, I created http://iucg.org/wiki, and prepared the http://iutf.org site and its http://iutf.org/wiki. (The refit of the structure I created in 1978 is under way at http://intlnet.org, at the same speed as your own http://telly.org blog :-).)

My next problem was to find the most elegant and powerful architecture and software tools to explore, discuss, develop, deploy and, most of all, test the IUI concept without any fast commercial or mafiosi investor taking it over (an easy takeover, since its value extends from a strict, open-minded respect and use of existing RFCs and code). I think I will have it in the next version of my ftp://ftp.ietf.org/internet-drafts/draft-iucg-internet-plus-10.txt, with the IUTF documented in a more reviewed version of http://tools.ietf.org/html/draft-iucg-iutf-tasks. The next two drafts are to document a semantic digital naming syntax (SDNS), for consistent naming and number resolution throughout the whole digital convergence, and an Intertest charter, to make sure that experimentation and testing do not interfere with other services and research when using the Internet as its own test-bed.

The ultimate issue will be the transition of the Governance from the Internet to the Internet+. I agree with Avri that the IGF should still be given a chance -- actually, that it should be used, by increasing its audience with a multi-consensus documentation wiki whose pages could become the de facto IGF-RFCs we have long awaited. I have reserved http://Wikigf.org to that end.

At 20:54 12/07/2012, Evan Leibovitch wrote:
On 12 July 2012 00:14, Avri Doria <[log in to unmask]> wrote:
Hi,
So another candidate reason to add to the list of:
- Pathetic outreach
- underdeveloped Ry/Rr capabilities
is
- ridiculous pricing


Careful. After all, we worked a lot on JAS, and it was designed to address that flaw which had been anticipated long in advance.
Yet only three bodies applied for JAS, and they all had insider connections.
So if people were aware of the TLD expansion, and even a cursory search would have indicated the existence of the JAS program (and the other forms of assistance offered to qualified applicants), then the issue is far more complex than "ridiculous pricing".

As I have argued (in what is apparently now referred to as "the Evan question" :-) ), owning a gTLD presents little value beyond speculation, vanity and luxury. As a luxury item, one can easily do without it and still maintain a significant Internet presence (as so many already do). So, as Milton said, anyone doing a hard analysis would find that in most cases a gTLD just isn't worth not only the ICANN fees but also the substantial human and logistical expense of regulatory compliance, marketing, WHOIS maintenance, reseller/registrar relations, government meddling, trademark-industry meddling, etc. Not to mention having enough of a reserve to help your registrants -- who depend upon your domains -- in case you have your own sustainability problems.

In other words, for the VAST bulk of the world's organizations (let alone people), owning a new gTLD just isn't worth the multiple costs and challenges -- not just within the developing world, but in much of the developed part too. How many applications are there that don't come from:

So before just talking about ridiculous pricing, let's ask those who shied away what cost would have been "non-ridiculous". Even without the $185K, the back-end registry operational challenges are substantial, and they are not all solved in software.

I have more than a passing interest in open source and development; among other things, I keynoted at the very first IDLELO conference, led a 22-person open source delegation at WSIS1/ICT4D, and was involved in the formation of FOSSFA. And I would be the first to suggest that, while just about everything needed software-wise to run a registry could be done in open source (if it hasn't been already), software cost/licensing is such a trivial part of the expense of running a registry that FOSS issues (and expertise) are essentially irrelevant here.

I think collecting reasons and then doing a bit of scientific falsification and relation testing work might help us in terms of figuring out how to fix this mess.


Assuming it's fixable, at least in this round. It's probably another decade at least before round 2 unfolds.

- Evan