Internet Domain Registry and Site Metadata – Opportunity and Issues

Registered domains carry specific details that identify a site and its ownership, and those details are published and used by many organizations and search engines.  To publish an Internet site, a domain must first be registered through a domain registrar.  The registration data is then held in registry databases and made available through directory services such as WHOIS, under policies coordinated by ICANN, which oversees the management of names, numbers, and associated ownership contact information.  The Internet Corporation for Assigned Names and Numbers is an American multi-stakeholder group and nonprofit organization responsible for coordinating the maintenance and procedures of several databases related to the namespaces and numerical spaces of the Internet, ensuring the network’s stable and secure operation (ICANN, 2020).  These registry services do not capture summaries of site content, though they do offer privacy options that restrict personal information from public access.  Responsibility for the site summary falls on the site developer, who must write it in adherence to search engine standards for the site to appear correctly in search results.  How the site is written, along with other site statistics, determines its ranking and visibility in keyword search results.  Standard writing strategies are based on keywords or on a metadata strategy that summarizes the site’s contents.
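
Registration data of this kind can be retrieved directly over the WHOIS protocol (RFC 3912).  The sketch below is a minimal Python example, assuming only the standard socket library; it queries the Verisign registry server for a .com domain, while other top-level domains use different WHOIS servers, and many registrants’ contact details are redacted by privacy services.

```python
import socket

def whois_query(domain: str, server: str = "whois.verisign-grs.com") -> str:
    """Send a raw WHOIS query (RFC 3912) and return the registry's response."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        # The WHOIS protocol is plain text: send the domain followed by CRLF.
        sock.sendall((domain + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

if __name__ == "__main__":
    # Prints registrar, creation date, and name server records for the domain;
    # contact details may be hidden when the registrant uses a privacy service.
    print(whois_query("example.com"))
```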

Historically, programmers wrote their site content using non-standard writing rules until search engines set specific rules and algorithms to reduce what they call ‘keyword stuffing,’ the use of non-relevant summaries to describe content (Wikipedia, 2020).  This type of publishing strategy is not globally or geographically organized by its publishers, forcing writers to state location information themselves.  These descriptions are freeform text, written in programming languages, and while registration details are stored in registry databases, they often do not match the actual location of the site.  If a corporation publishes a site serving multiple locations, the programmer must write pages or listings for every geography, creating duplicative work.  Some advancements, such as directory listings and keywords, have been made to address this problem, but this non-standard, varied strategy remains text based and should instead be stored in a database for better management and reduced duplication.  While programmers are required to write in certain languages and follow a standard registration policy, much of the surrounding information, such as site summaries, verifications, and protocols for search engine management, is left to a competitive, unorganized, and varied system that relies heavily on search engines to shape the Internet’s output.  A company serving 45 countries is forced to write its sites individually to appear in 45 or more places in a search engine, with results filtered by browser location verification.  This is clearly not the most efficient way to create a multi-location site that operates in several physical buildings across the world, or even virtually in one place.  The Internet’s current design amounts to a ‘notepad’ style of data management for location and physical information, paired with advanced programming features for images and animation.
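
As one illustration of the database-driven alternative suggested here, the following Python sketch (using the standard sqlite3 module, with purely illustrative company names, fields, and template text) stores each location once as a structured record and then generates the per-location metadata summaries that would otherwise be hand-written for every geography.

```python
# A minimal sketch of a database-driven approach: store each location once as
# structured data, then generate the per-location metadata that would
# otherwise be hand-written dozens of times.  Field names are assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE locations (name TEXT, city TEXT, country TEXT, service TEXT)"
)
conn.executemany(
    "INSERT INTO locations VALUES (?, ?, ?, ?)",
    [
        ("Acme Ltd", "London", "United Kingdom", "equipment rental"),
        ("Acme Ltd", "Toronto", "Canada", "equipment rental"),
        ("Acme Ltd", "Sydney", "Australia", "equipment rental"),
    ],
)

def meta_description(name: str, city: str, country: str, service: str) -> str:
    """Build one search-engine summary string from a single structured record."""
    return f'<meta name="description" content="{name} offers {service} in {city}, {country}.">'

# One query replaces a separately authored summary for each geography.
for row in conn.execute("SELECT name, city, country, service FROM locations"):
    print(meta_description(*row))
```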

With the implementation of database technologies and protocols, the Internet can become a more organized and better managed place, making it easier to search, evaluate, and manage information.  Businesses and publishers are currently required to publish additional listings on Google Places in order to be found on a map or in a directory.  This is a simple task that could instead be accomplished by working with addressing and licensing authorities alongside Internet service providers.  While maps and Google Places offer exceptional technology for location and advertising services, the fact that such listings are optional stifles critical Geographic Information System (GIS) and directory information system projects and limits national alert systems and data-gathering tasks.  Business analytics companies such as Hoovers, now part of Dun & Bradstreet, have provided business listing services based on research into companies across the world, supplying useful information for management, marketing, advertising, and other purposes.  Such a company is, in a sense, a search engine competitor, but it offers more in-depth research and targeted data gathering; with a few changes in Internet publishing standards, search engines could gain similarly specific query and data-management capability.  If the Internet were designed more like the book industry, where each work has an ISBN, a publisher, copyright information, registration with a specific authority, and a distribution plan through a clearinghouse for published works, it would be easier to manage, assess, access, and control.
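
To make the book-industry analogy concrete, the hypothetical Python sketch below outlines what a standardized, database-backed registry record for a published site might contain.  Every field name is an illustrative assumption rather than an existing standard; it is meant only to show how summaries and verified locations could be registered once with an authority instead of being rewritten as freeform text.

```python
# A hypothetical sketch of the book-industry analogy: if every published site
# carried a standardized registry record (akin to an ISBN entry), tools could
# query location and summary data directly instead of scraping freeform text.
# All field names here are illustrative assumptions, not an existing standard.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SiteRegistryRecord:
    registry_id: str          # unique identifier assigned by a registration authority
    domain: str               # the registered domain name
    publisher: str            # the legal entity responsible for the site
    copyright_notice: str     # copyright holder and year
    summary: str              # a structured, authority-verified content summary
    locations: List[str] = field(default_factory=list)  # verified physical locations

record = SiteRegistryRecord(
    registry_id="REG-0000001",
    domain="example.com",
    publisher="Example Publishing LLC",
    copyright_notice="© 2020 Example Publishing LLC",
    summary="Reference domain reserved for documentation examples.",
    locations=["Los Angeles, US", "Reading, GB"],
)
print(record)
```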

By Sheri L. Wilson

Author, PhD Student; Doctor of Technology, Research