Dean's Digital World

Working with the Wonderful WEB
By Dean Tudor
Things happen so fast out in cyberspace that the world of print cannot keep up with the changes, especially if the publication is like Sources, with its twice-a-year delivery. At least with a newspaper there is a chance of adding pertinent material when needed.
    
Last time out, I was enthusiastic about GOPHER, a text-based program that went out and fetched materials on the Internet and dropped them into your electronic account. GOPHER, though, is not exciting in this aberrant world: it is menu-driven and there is constant scrolling through hierarchies. This can be avoided on return visits by setting up a bookmark system. However, merely exploring - "surfing," as the netters say - is kind of boring, since you have to plow through the menus first. And there are no images or sounds.
    
   Nevertheless, as a fetcher or "go fer", GOPHER worked. 
   It brought mainly-ASCII text to my account, and I printed off the 
   file or shot it around to my friends for digital storage. I still 
   like GOPHER. I still use it. But, only a few other people do. They've 
   now moved on to hypertext, and the WEB. 
    
   The WEB is the fastest-growing application out on the Internet. 
   It can do the same thing as GOPHER - and more. It can deliver text, 
   images and sound to your account. It can go directly to the source, 
   without you having to menu your way through. It can go directly 
   to a connection through the world of hypertext. In a word, the WEB 
is A-M-A-Z-I-N-G. Sites are continually opening up, and sites are continually changing. It is de rigueur to date the information on the WEB, so that users know when the latest information was added. The number of sites appears to grow by about 200 per cent each month over the previous month's total. The number of new domains is also increasing; most of these are registered to create new WEB sites. For example, the latest figures I have (March 1995) show that 37 new domains are being registered every working hour.

To use the WEB, you need a surfing program. Free ones are available
   from Internet providers. You can use Lynx, which is just text-based 
   (I use Lynx not only to get to the WEB but also to GOPHER), or you 
   can use Mosaic, which is a GUI program. You can spend some money 
   and get Netscape, which is a commercial version of Mosaic with more 
bells and whistles. The only drawback to the GUI programs, which allow transmittal of video and audio components, is the extremely long time it takes to download the non-text data. If you have an
   account at a university, time will drag by slowly but you will eventually 
   get the video and audio to play around with. If you connect by home 
   phone through a modem, it can take what seems like forever just 
   to receive images. And, you have to have a lot of hard-drive space. 
The situation will not be resolved for home users until high-speed phone lines are installed. And, when that happens, you should be
   able to watch a television program on your computer! Is it any wonder 
   that the cable companies want a piece of the action? They want to 
   be able to send you the Internet data to your TV set. And, the phone 
   companies want to send TV shows to your computer. Aberrant behaviour... 
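The arithmetic behind that wait can be sketched in a few lines of Python. The 14.4 kbps modem and 10 Mbps campus link below are my assumed, illustrative line speeds, not figures from the article:

```python
# Rough sketch: transfer time = size in bits / line speed.
# Both speeds here are assumed for illustration.

def transfer_seconds(size_bytes, bits_per_second):
    """Seconds to move size_bytes over a line of the given speed."""
    return size_bytes * 8 / bits_per_second

image_bytes = 75 * 1024  # a single 75K file
print(f"home modem (14.4 kbps): {transfer_seconds(image_bytes, 14_400):.0f} s")
print(f"campus link (10 Mbps):  {transfer_seconds(image_bytes, 10_000_000):.2f} s")
```

A 75K file that a campus connection delivers in a blink ties up a home modem for the better part of a minute, and a page with several images multiplies that wait.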
    
   Once you are online with the WEB surfer, you need to build up a 
   series of addresses. These are called URLs (Uniform Resource Locators). 
   Each place you visit has to have a statement like: 
    
   http://www.acs.ryerson.ca/~journal/megasources.html 
    
(http stands for "hypertext transfer protocol": this tells Lynx or Mosaic to fetch the data; also, a URL is written as one word, with no spaces.)
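As a sketch of how a surfer reads such a statement, here is that same URL split into its parts with Python's standard urllib module (a present-day illustration of the idea, not a tool from the article):

```python
from urllib.parse import urlparse

# Split the example URL into the pieces a surfer like Lynx acts on.
parts = urlparse("http://www.acs.ryerson.ca/~journal/megasources.html")

print(parts.scheme)  # the protocol to speak: "http"
print(parts.netloc)  # the server to contact: "www.acs.ryerson.ca"
print(parts.path)    # the file to request: "/~journal/megasources.html"
```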
    
   So, this URL is for the WEB pages I constructed for the School of 
   Journalism at Ryerson Polytechnic University. If you were to "visit" 
   it by "pointing" your "surfer," you would find 
   about 10 computer screens-worth of "gateways" to other 
   sources out on the Internet. Some of these sources will be GOPHERS, 
   and others will be file directories (FTPs). But most of them will 
   be WEB sites. By the way, your surfer will also be able to handle 
   TELNET, USENET, and E-MAIL, making it a "one-program-fits-all" 
   application... 
    
   So, how do you find stuff out there? The best and easiest way is 
   simply through the built-in index that comes with the application 
program. For example, in Lynx I press the "i" key (i stands for index) and I am immediately connected with the MetaIndex base at NCSA (the developers of Mosaic). This index gives a choice of highlighted names. By pressing the Enter key at each highlighted name, I can get to that particular WEB site. And, each site has
   more linked sites as gateways. You could spend days just exploring 
   what you find through the "i" button. 
    
But, if you want to be more organized, then you'll need some sophisticated search engines that will scour the WEB world on a subject or name basis. There is one called the WWW Worm, which answers more than 2 million enquiries every month (http://www.cs.colorado.edu/home/mcbryan/WWWW.html).
   There is LYCOS, which can index over 3.6 million documents by URLs, 
   with abstracts from 23,550 WEB servers (http://lycos.cs.cmu.edu). 
   There is YAHOO, which has about 50,000 sites listed (http://www.yahoo.com). 
   Others, which you can find on my WEB page at Ryerson, include the 
   Global Network Navigator, the EINet Galaxy (which can also search 
   GOPHERS and HYTELNET), CUSI (Configurable Unified Search Interface), 
   CUI WWW Catalog, NIKOS, WWW Wanderer Index, WWW Virtual Library, 
   Harvest Broker (which is an index to personal home pages: these 
   usually contain "weird" and "fun" stuff), Internet 
   Index from SilverPlatter, the WebCrawler, and the Netscape Searcher. 
   This latter comes from the Netscape program itself, and it should 
   be the easiest to use if you have Netscape. 
    
   All of these will search and index sites for you. Usually, there 
   is a space for you to type in a request. Most use Boolean logic 
   (AND, OR, NOT) and some use single words; others use strict subject 
   headings. You never know what you are likely to find, so it is wise 
   to go to three different search engine sites - just to cover yourself. 
   If you have plenty of time, you might want to try URoulette, which 
   is a site that will put you into a random URL somewhere on the planet. 
   You never know what'll turn up! 
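The AND/OR/NOT matching these engines perform can be sketched in a few lines of Python. The toy index of URLs and titles below is invented for illustration, not real search-engine data:

```python
# Toy index of page titles (URLs and titles are invented examples).
pages = {
    "http://example.org/militia-history": "history of militia groups",
    "http://example.org/web-guide": "a guide to the world wide web",
    "http://example.org/gopher-faq": "gopher and the internet",
}

def matches(title, all_of=(), none_of=()):
    """Boolean match: AND every term in all_of, NOT any term in none_of."""
    words = title.split()
    return all(t in words for t in all_of) and not any(t in words for t in none_of)

# Query: pages mentioning "internet" but NOT "web".
hits = [url for url, title in pages.items()
        if matches(title, all_of=["internet"], none_of=["web"])]
print(hits)  # only the gopher FAQ survives both tests
```

Real engines index millions of documents rather than three, but the Boolean filtering you type into their search box works on the same principle.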
    
   Sometimes, you might just want to know in what regions the sites 
   are, which companies have sites, and their URLs. In that case, try 
the Comprehensive List of Sites (http://www.netgen.com/cgi/comprehensive) or the WWW Consortium Site List (http://www.w3.org/hypertext/DataSources/WWW/Servers.html)
   and search by company or possible site name or geography. For Canada, 
   there is the Canadian World Wide Web Servers (http://www.csr.ists.ca/w3can/Welcome.html) 
   and, for Europe, there is Best of the British Web Sites or EUROPA 
   (web server of the European Commission). Both are on my WEB page. 
    
   If you want direct links to subject areas, you might want to try 
   the pages put together by MegaLinks (http://www.eskimo.com/~future/megalinks.htm), 
   or the Web of Wonder, or Sleuth Resources, or Pointers to Pointers. 
One of the first lists of special connections was Scott Yanoff's
   (http://info.cern.ch/hypertext/DataSources/Yanoff.html). 
Others that are used quite regularly include Doug's Hotlist (http://dsys.ncsl.nist.gov:80/~dwhite/drw_link.html) and Urb's Hot Spots (http://www.charm.net/~lejeune/urb-menu.html).
   All of these are on my WEB pages. 
    
   Okay, so now you know where to search, who's got all the links, 
   and where the various sites are. Now, we need to know (shudder, 
   shudder): what's new? There is a site called What's New, and it 
   is at Mosaic (http://www.ncsa.uiuc.edu/SDG/Software/Mosaic/Docs/whats-new.html). 
   It should be your first port of call after you get a handle on the 
   WEB. Incredibly, it is put together three times a week: Monday, 
   Wednesday, and Friday. Each time, it has about 75K of data to send 
   you, and it takes at least 10 minutes to review it - more if you 
   want to try out the new URLs. In addition to highlighted addresses, 
   it also has the URLs that you can capture to your bookmark. So, 
   with a minimum of effort, you can get to the new stuff. There's 
   just one drawback: you've got to go and get the file three times 
   a week. If you get lazy and forget to go out for the stuff, then 
   you'll end up skipping some days. The files are pulled down and 
   shipped over to the What's New archives. This means additional searching. 
   It's similar at Web News Service, another supplier of new listings. 
It is a lot more convenient to have announcements sent to your box as e-mail; that way, you have to deal with them quickly. You can subscribe
   to the weekly Netsurfer Digest, which will have new stuff, or the 
   Scout Report. Both can serve nicely as updates. But What's New is 
   more comprehensive. You can also read Usenet news (comp.internet.net-happenings, 
   or comp.infosystems.www.announce). I don't want to load up this 
   article with cryptic URLs, so e-mail me (dtudor@acs.ryerson.ca) 
   for specifics on how to subscribe. Like everything on the Internet, 
   all this is free. 
    
   I used What's New to find WEB sites dealing with the Oklahoma City 
   bombing. I could have searched through the WEB indexes, but I wanted 
   immediate sites. Actually, I found more than I could handle through 
   just my e-mail subscriptions to journalism discussion groups. Within 
   minutes, items were posted. At the same time, URLs were posted for 
   militia and white supremacist sites. Yahoo and Lycos were useful 
   for searching for background (e.g., what had been said about militia 
   groups over the past six months). A search of Lycos under "white 
   power" showed a link to Stormfront, a white supremacist outfit 
   in Florida, which had links and phone numbers for other areas and 
   militia groups. That's the beauty of hypertext searching: find one 
   link and it will have gateways to almost all the others. If you 
   search the WWW Consortium on a geographic basis, then you can find 
   all the servers in Oklahoma City. You can just start pushing buttons, 
   or go to the University of Oklahoma server and check out their "Other 
   Servers in Oklahoma," and - bingo: entire listings related 
   to the bombing. This gets you information, but not more current 
   than the last 30 minutes or so because somebody still has to input 
   data. But, you can get the essential background data and images, 
   and then download the files to play around with at some later point. 
   You can discover the Usenet groups that deal with militia and begin 
   searching with words such as "white," "guns," 
   "militia," "supremacy." Some of the WEB sites 
   that were useful: 
  http://www.accesscom.net/stormfront/ 
   http://wwwvms.utexas.edu/~AXL/index.html 
   ftp://ftp.netcom.com/pub/NA/NA/ 
   http://www.cpb.uokhsc.edu/okwww.html 
  And, some of the discussion groups: 
  alt.politics.nationalism.white 
   alt.politics.white-power 
   alt.revolution.counter 
alt.revisionism
   alt.conspiracy 
   misc.activism.militia 
With the same caveats that apply to any sources, the Internet is a wonderful place to do research, especially if you work at home and can phone out all day long! You can do research (locate background material
   and source documents), reference (look up facts, statistics, names, 
   dates), and rendezvous (find out where the experts gather, find 
   the people with experience). 
    
   Does this give new meaning to the 3 Rs? Research, Reference, and 
   Rendezvous...I wish I had said that, but I got it from Nora Paul, 
who runs the Research Library at the Poynter Institute in St. Petersburg,
   Florida (npaul@poynter.org). Every week or so, she and her staff 
   come up with sources for the top US news story: a combination of 
   book facts, periodicals, internet sources, people, usenet groups, 
   etc., etc. You can get into it at: 
    
   http://www.nando.net/prof/poynter/hrintro.html 
    
   A typical recent story dealt with natural "disasters" 
   (floods, earthquakes). Nora posted terrific stuff from the Federal 
   Emergency Management Agency's website (texts, speeches, press releases, 
   images), the Red Cross website, the Emergency Preparedness Information 
   eXchange (EPIX) at Simon Fraser University's website, plus scores 
   of agency listings and groups which provide disaster relief. She 
   even had a GOPHER that provided information on more than 30 members 
   of the National Voluntary Agencies Active in Disaster. Then, she 
   had a subject breakdown, for specific sources on hurricanes, cyclones, 
   earthquakes, floods, and volcanoes. 
    
The Internet, of course, is American first, since that is where most of the audience and information providers come from. But there is certainly a Canadian presence; perhaps we are second in usage, followed by Britain. Certainly, it is an English-language communication device.
    
  Useful Resources about the Internet for Journalists 
   
  Try to buy the following books: 
    
   The Online Journalist: using the Internet and other electronic 
   resources, by Randy Reddick and Elliot King (Harcourt Brace, 1995, 
   251p.) ISBN 0-15-502018-8.  
   It costs $30 CDN, but it is worth it, since it is the first book 
   pitched to journalists and written by journalists. Of course, it 
   is non-current, since the Internet changes every day and this book 
   was finished off in September, 1994. But, unless you use the Internet 
   every day, you don't really know that... 
    
    Canadian Internet Handbook, 1995 edition, by Jim Carroll and 
   Rick Broadhead (Prentice Hall Canada, 1994, 798p.) ISBN 0-13-329350-5. 
    
   It costs $21.95, but it includes $20 in discount coupons for Internet 
   providers. There is a lot of wasted space in this book, and, of 
   course, it was put together last July, so it is even more non-current. 
   But, it does give an explanation of the Internet. You don't need 
   to buy any other generic book. You may also want to hold off until 
   the 1996 edition comes out, then buy it right away. 
    
   Every Student's Guide to the Internet, by Keiko Pitter and four 
   others (McGraw-Hill, 1995, 183p.) ISBN 0-07-051773-8, $22.95 CDN. 
    
   It was put together by five Internet instructors at an American 
   university. It is laden with useful hands-on tutorials and exercises. 
Also good for its clarity and tips for doing "academic" research, which neither of the other two books handles.
   
  There are two other "books" you will run into, but these 
   are e-texts and they are freely available. You can surf over to 
   the Electronic Frontier Foundation (via my website) to get a free 
   copy of Adam Gaffin's Everybody's Guide to the Internet. 
   It is about 500K in size (a normal 300-page textbook with no illustrations 
   is about 625K). And, you can surf over to Norway and pick up Odd 
   de Presno's The Online World. This is about 1 MEG, and it 
   is e-text shareware (that is, he'd like some money if you find it 
   useful; for that you'll also get his newsletter and generous updates, 
   etc.). de Presno's book is updated every TWO months. You cannot 
   beat that. Gaffin's book is updated occasionally. Certainly, both 
   books are far more current than any print offering. 
    
   Also useful for documents are the Poynter Web site and the InfoPro 
   gopher site (gopher://gopher.oss.net). The InfoPro site serves 
   information brokers, information retrieval and investigative professionals. 
   Among its resources are the Japanese Business Intelligence and Credit 
   Information Resources, the National Voter Registration Database 
   Contact Information, Securities and Exchange Commission Electronic 
   Records (EDGAR), direct dial numbers for US court legal information 
   and their BBSes, and an information broker's handbook. Quite a useful 
   site for researchers. 
    
   
  Dean Tudor is Sources Informatics Consultant and a professor 
   of Journalism and Information Science at Ryerson University. He 
   can be reached at dtudor@acs.ryerson.ca. 
Published in Sources, Number 36, Summer 1995.
   
  
   
   
   
   