Friday, 27 April 2007

Communication Networks

By Sharam Hekmat
This book is concerned with post-computer communication networks and two of their important streams: data communication and telecommunication. Data communication refers to communication between digital computers, facilitated by computer networks. Telecommunication refers to primarily human-to-human communication, facilitated by the global telephone system. The differences between the two streams are mainly historical. Telecommunication increasingly relies on digital computer technology, and data communication relies more than ever on telecommunication networks. The two streams are rapidly converging.
Newcomers to this field are often bewildered by the sheer wealth of information already published on the subject. This book is aimed at that group of people. It provides broad coverage of the key concepts, techniques, and terminology, so as to prepare readers for more advanced discussions. In-depth discussions of technically involved topics are intentionally avoided in favor of more general concepts. No previous knowledge of networks or programming is assumed.
The structure of the book is as follows. Chapter 1 introduces computer networks and explains some of their elementary concepts. It also introduces the OSI reference model, upon which later chapters are based. Each of Chapters 2-8 describes one of the seven layers of the OSI model in the context of wide area data networks. Chapter 9 looks at local area networks and their applications. Chapter 10 provides an introduction to telecommunication. Chapter 11 builds on earlier chapters by examining ISDN as the merging point of data and voice networks. Chapter 12 looks at the ATM technology and the potential applications that it can support.
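Since Chapters 2-8 devote one chapter to each layer, it may help to have the seven OSI layers in front of you. A simple lookup table (the layer names are the standard ones; the code itself is merely illustrative):

```python
# The seven layers of the OSI reference model, numbered from the
# physical layer (1) at the bottom to the application layer (7) at the top.
OSI_LAYERS = {
    1: "Physical",
    2: "Data Link",
    3: "Network",
    4: "Transport",
    5: "Session",
    6: "Presentation",
    7: "Application",
}

def layer_name(n):
    """Return the name of OSI layer n (1-7)."""
    return OSI_LAYERS[n]
```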

Introduction to Data Communications

by Eugene Blanchard
Data Communications is the transfer of data or information between a source and a receiver. The source transmits the data and the receiver receives it. The actual generation of the information is not part of Data Communications, nor is the resulting action of the information at the receiver. Data Communications is concerned with the transfer of data, the method of transfer, and the preservation of the data during the transfer process.
In Local Area Networks, we are interested in "connectivity": connecting computers together to share resources. Even though the computers can have different disk operating systems, languages, cabling, and locations, they can still communicate with one another and share resources.
The purpose of Data Communications is to provide the rules and regulations that allow computers with different disk operating systems, languages, cabling, and locations to share resources. In Data Communications, these rules and regulations are called protocols and standards.
What is a Network? A network can consist of two computers connected together on a desk or it can consist of many Local Area Networks (LANs) connected together to form a Wide Area Network (WAN) across a continent.
The key is that two or more computers are connected by a communication medium and are sharing resources. The resources can be files, printers, hard drives, or CPU number-crunching power.

Saturday, 24 March 2007

Building Internet Firewalls Second Edition

By Elizabeth D. Zwicky, Simon Cooper and D. Brent Chapman
Part I, "Network Security", explores the problem of Internet security and focuses on firewalls as part of an effective strategy to address that problem.
  • Chapter 1, "Why Internet Firewalls?", introduces the major risks associated with using the Internet today; discusses what to protect, and what to protect against; discusses various security models; and introduces firewalls in the context of what they can and can't do for your site's security.
  • Chapter 2, "Internet Services", outlines the services users want and need from the Internet, and summarizes the security problems posed by those services.
  • Chapter 3, "Security Strategies", outlines the basic security principles an organization needs to understand before it adopts a security policy and invests in specific security mechanisms.

Part II, "Building Firewalls", describes how to build firewalls.

  • Chapter 4, "Packets and Protocols", describes the basic network concepts firewalls work with.
  • Chapter 5, "Firewall Technologies", explains the terms and technologies used in building firewalls.
  • Chapter 6, "Firewall Architectures", describes the major architectures used in constructing firewalls, and the situations they are best suited to.
  • Chapter 7, "Firewall Design", presents the process of designing a firewall.
  • Chapter 8, "Packet Filtering", describes how packet filtering systems work, and discusses what you can and can't accomplish with them in building a firewall.
  • Chapter 9, "Proxy Systems", describes how proxy clients and servers work, and how to use these systems in building a firewall.
  • Chapter 10, "Bastion Hosts", presents a general overview of the process of designing and building the bastion hosts used in many firewall configurations.
  • Chapter 11, "Unix and Linux Bastion Hosts", presents the details of designing and building a Unix or Linux bastion host.
  • Chapter 12, "Windows NT and Windows 2000 Bastion Hosts", presents the details of designing and building a Windows NT or Windows 2000 bastion host.

Part III, "Internet Services", describes how to configure services in the firewall environment.

  • Chapter 13, "Internet Services and Firewalls", describes the general issues involved in selecting and configuring services in the firewall environment.
  • Chapter 14, "Intermediary Protocols", discusses basic protocols that are used by multiple services.
  • Chapter 15, "The World Wide Web", discusses the Web and related services.
  • Chapter 16, "Electronic Mail and News", discusses services used for transferring electronic mail and Usenet news.
  • Chapter 17, "File Transfer, File Sharing, and Printing", discusses the services used for moving files from one place to another.
  • Chapter 18, "Remote Access to Hosts", discusses services that allow you to use one computer from another computer.
  • Chapter 19, "Real-Time Conferencing Services", discusses services that allow people to interact with each other online.
  • Chapter 20, "Naming and Directory Services", discusses the services used to distribute information about hosts and users.
  • Chapter 21, "Authentication and Auditing Services", discusses services used to identify users before they get access to resources, to keep track of what sort of access they should have, and to keep records of who accessed what and when.
  • Chapter 22, "Administrative Services", discusses other services used to administer machines and networks.
  • Chapter 23, "Databases and Games", discusses the remaining two major classes of popular Internet services, databases and games.
  • Chapter 24, "Two Sample Firewalls", presents two sample configurations for basic firewalls.

Part IV, "Keeping Your Site Secure", describes how to establish a security policy for your site, maintain your firewall, and handle the security problems that may occur with even the most effective firewalls.

  • Chapter 25, "Security Policies", discusses the importance of having a clear and well-understood security policy for your site, and what that policy should and should not contain. It also discusses ways of getting management and users to accept the policy.
  • Chapter 26, "Maintaining Firewalls", describes how to maintain security at your firewall over time and how to keep yourself aware of new Internet security threats and technologies.
  • Chapter 27, "Responding to Security Incidents", describes what to do when a break-in occurs, or when you suspect that your security is being breached.

Part V, "Appendixes", consists of the following summary appendixes:

  • Appendix A, "Resources", contains a list of places you can go for further information and help with Internet security: World Wide Web pages, FTP sites, mailing lists, newsgroups, response teams, books, papers, and conferences.
  • Appendix B, "Tools", summarizes the best freely available firewall tools and how to get them.
  • Appendix C, "Cryptography", contains background information on cryptography that is useful to anyone trying to decrypt the marketing materials for security products.

DNS and BIND Fourth Edition

By Paul Albitz and Cricket Liu

The Domain Name System is a distributed database. This allows local control of the segments of the overall database, yet the data in each segment is available across the entire network through a client-server scheme. Robustness and adequate performance are achieved through replication and caching.
Programs called name servers constitute the server half of DNS's client-server mechanism. Name servers contain information about some segments of the database and make it available to clients, called resolvers. Resolvers are often just library routines that create queries and send them across a network to a name server.
The structure of the DNS database is very similar to the structure of the Unix filesystem, as shown in Figure 1-1. The whole database (or filesystem) is pictured as an inverted tree, with the root node at the top. Each node in the tree has a text label, which identifies the node relative to its parent. This is roughly analogous to a "relative pathname" in a filesystem, like bin. One label -- the null label, or "" -- is reserved for the root node. In text, the root node is written as a single dot (.). In the Unix filesystem, the root is written as a slash (/).
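The name-to-tree analogy can be sketched in a few lines: a domain name is read as the labels on the path from a node up toward the root, joined by dots. (The helper names below are our own, not from any DNS library.)

```python
def labels(domain):
    """Split a domain name into its labels, from the node up toward the root.
    The trailing dot, if present, stands for the root's null label."""
    return [l for l in domain.rstrip(".").split(".") if l]

def ancestors(domain):
    """List the enclosing nodes of a domain, ending at the root ('.')."""
    parts = labels(domain)
    out = []
    for i in range(1, len(parts)):
        out.append(".".join(parts[i:]) + ".")
    out.append(".")  # the root node, written as a single dot
    return out

# e.g. ancestors("www.example.com.") -> ["example.com.", "com.", "."]
```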
The first implementation of the Domain Name System was called JEEVES, written by Paul Mockapetris himself. A later implementation was BIND, an acronym for Berkeley Internet Name Domain, which was written for Berkeley's 4.3 BSD Unix operating system by Kevin Dunlap. BIND is now maintained by the Internet Software Consortium.
BIND is the implementation we'll concentrate on in this book and is by far the most popular implementation of DNS today. It has been ported to most flavors of Unix and is shipped as a standard part of most vendors' Unix offerings. BIND has even been ported to Microsoft's Windows NT.
The fourth edition of this book deals with the new 9.1.0 and 8.2.3 versions of BIND as well as the older 4.9 versions. While 9.1.0 and 8.2.3 are the most recent versions as of this writing, they haven't made their way into many vendors' versions of Unix yet, partly because both versions have only recently been released and many vendors are wary of using such new software. We also occasionally mention other versions of BIND, especially 4.8.3, because many vendors continue to ship code based on this older software as part of their Unix products. Whenever a feature is available only in the 4.9, 8.2.3, or 9.1.0 version, or when there is a difference in the behavior of the versions, we try to point out which version does what.
We use nslookup, a name server utility program, very frequently in our examples. The version we use is the one shipped with the 8.2.3 BIND code. Older versions of nslookup provide much, but not quite all, of the functionality in the 8.2.3 nslookup. We've used commands common to most versions of nslookup in most of our examples; when this was not possible, we tried to note it.

Network Troubleshooting Tools First Edition

by Joseph D. Sloan

This book is not a general introduction to network troubleshooting. Rather, it is about one aspect of troubleshooting -- information collection. This book is a tutorial introduction to tools and techniques for collecting information about computer networks. It should be particularly useful when dealing with network problems, but the tools and techniques it describes are not limited to troubleshooting. Many can and should be used on a regular basis regardless of whether you are having problems.
Some of the tools I have selected may be a bit surprising to many. I strongly believe that the best approach to troubleshooting is to be proactive, and the tools I discuss reflect this belief. Basically, if you don't understand how your network works before you have problems, you will find it very difficult to diagnose problems when they occur. Many of the tools described here should be used before you have problems. As such, these tools could just as easily be classified as network management or network performance analysis tools.
This book does not attempt to catalog every possible tool. There are simply too many tools already available, and the number is growing too rapidly. Rather, this book focuses on the tools that I believe are the most useful, a collection that should help in dealing with almost any problem you see. I have tried to include pointers to other relevant tools when there wasn't space to discuss them. In many cases, I have described more than one tool for a particular job. It is extremely rare for two tools to have exactly the same features. One tool may be more useful than another, depending on circumstances. And, because of the differences in operating systems, a specific tool may not be available on every system. It is worth knowing the alternatives.
The book is about freely available Unix tools. Many are open source tools covered by GNU- or BSD-style licenses. In selecting tools, my first concern has been availability. I have given the highest priority to the standard Unix utilities. Next in priority are tools available as packages or ports for FreeBSD or Linux. Tools requiring separate compilation or available only as binaries were given a lower priority since these may be available on fewer systems. In some cases, PC-only tools and commercial tools are noted but are not discussed in detail. The bulk of the book is specific to Ethernet and TCP/IP, but the general approach and many of the tools can be used with other technologies.
While this is a book about Unix tools, at the end of most of the chapters I have included a brief section for Microsoft Windows users. These sections are included since even small networks usually include a few computers running Windows. These sections are not, even in the wildest of fantasies, meant to be definitive. They are provided simply as starting points -- a quick overview of what is available.
Finally, this book describes a wide range of tools. Many of these tools are designed to do one thing and are often overlooked because of their simplicity. Others are extremely complex tools or sets of tools. I have not attempted to provide a comprehensive treatment for each tool discussed. Some of these tools can be extremely complex when used to their fullest. Some have manuals and other documentation that easily exceed the size of this book. Most have additional documentation that you will want to retrieve once you begin using them.
My goal is to make you aware of these tools and to give you enough information to decide which ones may be most useful to you, and in what context, so that you can get started using them. Each chapter centers on a collection of related tasks or problems and the tools useful for dealing with them. The discussion is limited to features relevant to the problem at hand. Consequently, the same tool may be discussed in several places throughout the book.
Please be warned: the suitability or behavior of these tools on your system cannot be guaranteed. While the material in this book is presented in good faith, neither the author nor O'Reilly & Associates makes any explicit or implied warranty as to the behavior or suitability of these tools. We strongly urge you to assess and evaluate these tools as appropriate for your circumstances.


Essential SNMP First Edition

by Douglas R. Mauro and Kevin J. Schmidt

The Simple Network Management Protocol (SNMP) is an Internet-standard protocol for managing devices on IP networks. Many kinds of devices support SNMP, including routers, switches, servers, workstations, printers, modem racks, and uninterruptible power supplies (UPSs). The ways you can use SNMP range from the mundane to the exotic: it's fairly simple to use SNMP to monitor the health of your routers, servers, and other pieces of network hardware, but you can also use it to control your network devices and even send pages or take other automatic action if problems arise. The information you can monitor ranges from relatively simple and standardized items, like the amount of traffic flowing into or out of an interface, to more esoteric hardware- and vendor-specific items, like the air temperature inside a router.
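Every item SNMP can monitor is named by an object identifier (OID), a dotted sequence of integers; 1.3.6.1.2.1.1.1.0, for instance, names sysDescr.0. A minimal sketch of OID handling (the helper names are our own, not from any SNMP library):

```python
def parse_oid(oid):
    """Parse a dotted OID string into a tuple of integers."""
    return tuple(int(part) for part in oid.strip(".").split("."))

def in_subtree(oid, subtree):
    """True if oid lies under subtree -- the test an SNMP walk uses to
    know when it has left the branch it is traversing."""
    o, s = parse_oid(oid), parse_oid(subtree)
    return o[:len(s)] == s

# sysDescr.0 lies under the MIB-II "system" group, 1.3.6.1.2.1.1:
# in_subtree("1.3.6.1.2.1.1.1.0", "1.3.6.1.2.1.1") -> True
```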
Given that there are already a number of books about SNMP in print, why write another one? Because few of them are aimed at the practicing network or system administrator. Many cover how to implement SNMP or discuss the protocol at a fairly abstract level, but none really answers the network administrator's most basic questions: How can I best put SNMP to work on my network? How can I make managing my network easier?
We provide a brief overview of the SNMP protocol in Chapter 2, "A Closer Look at SNMP" then spend a few chapters discussing issues such as hardware requirements and the sorts of tools that are available for use with SNMP. However, the bulk of this book is devoted to discussing, with real examples, how to use SNMP for system and network administration tasks.
Most newcomers to SNMP ask some or all of the following questions:
  • What exactly is SNMP?
  • How can I, as a system or network administrator, benefit from SNMP?
  • What is a MIB?
  • What is an OID?
  • What is a community string?
  • What is a trap?
  • I've heard that SNMP is insecure. Is this true?
  • Do any of my devices support SNMP? If so, how can I tell if they are configured properly?
  • How do I go about gathering SNMP information from a device?
  • I have a limited budget for purchasing network-management software. What sort of free/open source software is available?
  • Is there an SNMP Perl module that I can use to write cool scripts?
This book answers all these questions and more. Our goal is to demystify SNMP and make it more accessible to a wider range of users.


Managing NFS and NIS Second Edition

by Hal Stern, Mike Eisler and Ricardo Labiaga

This book is of interest to system administrators and network managers who are installing or planning new NFS and NIS networks, or debugging and tuning existing networks and servers. It is also aimed at the network user who is interested in the mechanics that hold the network together.
We'll assume that you are familiar with the basics of Unix system administration and TCP/IP networking. Terms that are commonly misused or particular to a discussion will be defined as needed. Where appropriate, an explanation of a low-level phenomenon, such as Ethernet congestion, will be provided if it is important to a more general discussion, such as NFS performance on a congested network. Models for these phenomena will be drawn from everyday examples rather than from their more rigorous mathematical and statistical roots.
This book focuses on the way NFS and NIS work, and how to use them to solve common problems in a distributed computing environment. Because Sun Microsystems developed and continues to innovate NFS and NIS, this book uses Sun's Solaris operating system as the frame of reference. Thus if you are administering NFS on non-Solaris systems, you should use this book in conjunction with your vendor's documentation, since utilities and their options will vary by implementation and release. This book explains what the configuration files and utilities do, and how their options affect performance and system administration issues. By walking through the steps comprising a complex operation or by detailing each step in the debugging process, we hope to shed light on techniques for effective management of distributed computing environments. There are very few absolute constraints or thresholds that are universally applicable, so we refrain from stating them. This book should help you to determine the fair utilization and performance constraints for your network.

SSH: The Secure Shell - The Definitive Guide

by Daniel J. Barrett and Richard E. Silverman
Privacy is a basic human right, but on today's computer networks, privacy isn't guaranteed. Much of the data that travels on the Internet or local networks is transmitted as plain text, and may be captured and viewed by anybody with a little technical know-how. The email you send, the files you transmit between computers, even the passwords you type may be readable by others. Imagine the damage that could be done if an untrusted third party -- a competitor, the CIA, your in-laws -- intercepted your most sensitive communications in transit.
Network security is big business as companies scramble to protect their information assets behind firewalls, establish virtual private networks (VPNs), and encrypt files and transmissions. But hidden away from all the bustle, there is a small, unassuming, yet robust solution many big companies have missed. It's reliable, reasonably easy to use, cheap, and available for most of today's operating systems.
It's SSH, the Secure Shell.

TCP/IP Network Administration Third Edition

by Craig Hunt
The first edition of TCP/IP Network Administration was written in 1992. In the decade since, many things have changed, yet some things remain the same. TCP/IP is still the preeminent communications protocol for linking together diverse computer systems. It remains the basis of interoperable data communications and global computer networking. The underlying Internet Protocol (IP), Transmission Control Protocol (TCP), and User Datagram Protocol (UDP) are remarkably unchanged. But change has come in the way TCP/IP is used and how it is managed.
A clear symbol of this change is the fact that my mother-in-law has a TCP/IP network connection in her home that she uses to exchange electronic mail, compressed graphics, and hypertext documents with other senior citizens. She thinks of this as "just being on the Internet," but the truth is that her small system contains a functioning TCP/IP protocol stack, manages a dynamically assigned IP address, and handles data types that did not even exist a decade ago.
In 1991, TCP/IP was a tool of sophisticated users. Network administrators managed a limited number of systems and could count on the users for a certain level of technical knowledge. No more. In 2002, the need for highly trained network administrators is greater than ever because the user base is larger, more diverse, and less capable of handling technical problems on its own. This book provides the information needed to become an effective TCP/IP network administrator.
TCP/IP Network Administration was the first book of practical information for the professional TCP/IP network administrator, and it is still the best. Since the first edition was published there has been an explosion of books about TCP/IP and the Internet. Still, too few books concentrate on what a system administrator really needs to know about TCP/IP administration. Most books are either scholarly texts written from the point of view of the protocol designer, or instructions on how to use TCP/IP applications. All of those books lack the practical, detailed network information needed by the Unix system administrator. This book strives to focus on TCP/IP and Unix and to find the right balance of theory and practice.
I am proud of the earlier editions of TCP/IP Network Administration. In this edition, I have done everything I can to maintain the essential character of the book while making it better. Dynamic address assignment based on Dynamic Host Configuration Protocol (DHCP) is covered. The Domain Name System material has been updated to cover BIND 8 and, to a lesser extent, BIND 9. The email configuration is based on the current version of sendmail 8, and the operating system examples are from the current versions of Solaris and Linux. The routing protocol coverage includes Routing Information Protocol version 2 (RIPv2), Open Shortest Path First (OSPF), and Border Gateway Protocol (BGP). I have also added a chapter on Apache web server configuration, new material on xinetd, and information about building a firewall with iptables. Despite the additional topics, the book has been kept to a reasonable length.
TCP/IP is a set of communications protocols that define how different types of computers talk to each other. TCP/IP Network Administration is a book about building your own network based on TCP/IP. It is both a tutorial covering the "why" and "how" of TCP/IP networking, and a reference manual for the details about specific network programs.
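As a concrete taste of the "how", here is one TCP round trip over the loopback interface, using only Python's standard socket module (a self-contained sketch, not an example from the book):

```python
import socket
import threading

def run_demo():
    """One TCP exchange on the loopback interface: a server echoes
    the client's bytes back uppercased."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve_once():
        conn, _ = srv.accept()       # wait for the client to connect
        conn.sendall(conn.recv(1024).upper())
        conn.close()

    t = threading.Thread(target=serve_once)
    t.start()
    cli = socket.create_connection(("127.0.0.1", port))
    cli.sendall(b"hello, tcp")
    reply = cli.recv(1024)
    cli.close()
    t.join()
    srv.close()
    return reply

# run_demo() -> b'HELLO, TCP'
```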

Building Internet Firewalls First Edition

By D. Brent Chapman and Elizabeth D. Zwicky
This book is a practical guide to building your own firewall. It provides step-by-step explanations of how to design and install a firewall at your site, and how to configure Internet services such as electronic mail, FTP, the World Wide Web, and others to work with a firewall. Firewalls are complex, though, and we can't boil everything down to simple rules. Too much depends on exactly what hardware, operating system, and networking you are using at your site, and what you want your users to be able to do, and not do. We've tried to give you enough rules, examples, and resources here so you'll be able to do the rest on your own.
What is a firewall, and what does it do for you? A firewall is a way to restrict access between the Internet and your internal network. You typically install a firewall at the point of maximum leverage, the point where your network connects to the Internet. The existence of a firewall at your site can greatly reduce the odds that outside attackers will penetrate your internal systems and networks. The firewall can also keep your own users from compromising your systems by sending dangerous information - unencrypted passwords and sensitive data - to the outside world.
The attacks on Internet-connected systems we are seeing today are more serious and more technically complex than those in the past. To keep these attacks from compromising our systems, we need all the help we can get. Firewalls are a highly effective way of protecting your site from these attacks. For that reason, we strongly recommend you include a firewall in your site's overall Internet security plan. However, a firewall should be only one component in that plan. It's also vital that you establish a security policy, that you implement strong host security, and that you consider the use of authentication and encryption devices that work with the firewalls you install. This book will touch on each of these topics while maintaining its focus on firewalls.
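The kind of restriction a firewall enforces is often expressed as an ordered list of packet-filtering rules, matched first to last. A toy model (real filters match on far more fields, and the addresses and ports here are invented for illustration):

```python
# Each rule: (action, source_prefix, dest_port) -- a deliberately tiny model.
# '*' matches anything; the first matching rule wins; the default is deny.
RULES = [
    ("allow", "192.168.1.", 25),   # internal hosts may reach the mail port
    ("allow", "*",          80),   # anyone may reach the web server
    ("deny",  "*",          "*"),  # everything else is dropped
]

def check(src_ip, dst_port):
    """Return the action of the first rule matching this packet."""
    for action, prefix, port in RULES:
        if prefix != "*" and not src_ip.startswith(prefix):
            continue
        if port != "*" and port != dst_port:
            continue
        return action
    return "deny"   # no rule matched: default deny
```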

Sendmail Desktop Reference First Edition

By Bryan Costales and Eric Allman
The sendmail program is a Mail Transport Agent (MTA). It accepts mail from Mail User Agents (MUAs), mail users (humans), and other MTAs. It then delivers that mail to Mail Delivery Agents (MDAs) on the local machine, or transports that mail to another MTA at another machine. The behavior of sendmail is determined by its command line and by commands in its configuration file.
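The routing decision described above -- deliver locally through an MDA, or transport to another MTA -- can be sketched as follows (the names and domains are our own invention, and this is not sendmail's actual rule syntax):

```python
# Domains this machine considers local -- hypothetical examples.
LOCAL_DOMAINS = {"example.com", "localhost"}

def route(recipient):
    """Decide how an MTA disposes of a message for this recipient:
    local delivery through an MDA, or relay to the recipient's MTA."""
    user, _, domain = recipient.partition("@")
    if domain == "" or domain in LOCAL_DOMAINS:
        return ("mda", user)      # hand to a local delivery agent
    return ("mta", domain)        # transport to the remote machine's MTA

# route("alice@example.com") -> ("mda", "alice")
# route("bob@elsewhere.org") -> ("mta", "elsewhere.org")
```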
The sendmail program is written and maintained by Eric Allman at sendmail.org. Versions V8.7 and earlier are no longer supported and are no longer considered secure. If you are not currently running V8.8, we recommend you upgrade now. This Desktop Reference covers sendmail version 8.8.5.
This Desktop Reference is a companion to the second edition of the sendmail book by Bryan Costales with Eric Allman, published by O'Reilly & Associates. Section numbers herein reference the section numbers in that book. This is a reference guide only - for detail or tutorial information, refer to the full sendmail book.

TCP/IP Network Administration Second Edition

By Craig Hunt
The protocol wars are over and TCP/IP won. TCP/IP is now universally recognized as the pre-eminent communications protocol for linking together diverse computer systems. The importance of interoperable data communications and global computer networks is no longer debated. But that was not always the case. When I wrote the first edition of this book, IPX was far and away the leading PC communications protocol. Microsoft did not bundle communications protocols in their operating system. Corporate networks were so dependent on SNA that many corporate network administrators had not even heard of TCP/IP. Even UNIX, the mother of TCP/IP, nursed a large number of pure UUCP networks. Back then I felt compelled to tout the importance of TCP/IP by pointing out that it was used on thousands of networks and hundreds of thousands of computers. How times have changed! Today we count the hosts and users connected to the Internet in the tens of millions. And the Internet is only the tip of the TCP/IP iceberg. The largest market for TCP/IP is in the corporate "intranet." An intranet is a private TCP/IP network used to disseminate information within the enterprise. The competing network technologies have shrunk to niche markets where they fill special needs - while TCP/IP has grown to be the communications software that links the world.
The acceptance of TCP/IP as a worldwide standard and the size of its global user base are not the only things that have changed. In 1991 I lamented the lack of adequate documentation. At the time it was difficult for a network administrator to find the information he or she needed to do the job. Since that time there has been an explosion of books about TCP/IP and the Internet. However, there are still too few books that concentrate on what a system administrator really needs to know about TCP/IP administration and too many books that try to tell you how to surf the Web. In this book I strive to focus on TCP/IP and UNIX, and not to be distracted by the phenomenon of the Internet.
I am very proud of the first edition of TCP/IP Network Administration. In the second edition, I have done everything I can to maintain the essential character of the book while making it better. The Domain Name Service material has been updated to cover the latest version of the BIND 4 software. The email configuration is now based on sendmail version 8, and the operating system examples are from the current versions of Solaris and Linux. The routing protocol coverage has been expanded to include Open Shortest Path First (OSPF) and Border Gateway Protocol (BGP). I have also added new topics such as one-time passwords and configuration servers based on Dynamic Host Configuration Protocol (DHCP) and Bootstrap Protocol (BOOTP). Despite the additional topics, the book has been kept to a reasonable length.
The bulk of this edition is derived directly from the first edition of the book. To emphasize both that times have changed and that my focus on practical information has not, I have left the introductory paragraphs from the first edition intact.

DNS and BIND Third Edition

By Cricket Liu & Paul Albitz
You may not know much about the Domain Name System - yet - but whenever you use the Internet, you use DNS. Every time you send electronic mail or surf the World Wide Web, you rely on the Domain Name System.
You see, while you, as a human being, prefer to remember the names of computers, computers like to address each other by number. On an internet, that number is 32 bits long, or between zero and four billion or so.[1] That's easy for a computer to remember, because computers have lots of memory ideal for storing numbers, but it isn't nearly as easy for us humans. Pick ten phone numbers out of the phone book at random, and then try to remember them. Not easy? Now flip to the front of the book and attach random area codes to the phone numbers. That's about how difficult it would be to remember ten arbitrary internet addresses.
[1] And, with IP version 6, it's soon to be a whopping 128 bits long, or between zero and a decimal number with 39 digits.
This is part of the reason we need the Domain Name System. DNS handles mapping between host names, which we humans find convenient, and internet addresses, which computers deal with. In fact, DNS is the standard mechanism on the Internet for advertising and accessing all kinds of information about hosts, not just addresses. And DNS is used by virtually all internetworking software, including electronic mail, remote terminal programs such as telnet, file transfer programs such as ftp, and web browsers such as Netscape Navigator and Microsoft Internet Explorer.
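The name-to-address mapping described above is exactly what the standard resolver library exposes to programs. A minimal C sketch, using "localhost" so the lookup works without outside network access (on most systems it resolves to 127.0.0.1):

```c
/* Resolve a host name to the 32-bit IPv4 number computers prefer. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <netdb.h>

int main(void)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_INET;         /* IPv4 only: a 32-bit address */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo("localhost", NULL, &hints, &res) != 0) {
        fprintf(stderr, "lookup failed\n");
        return 1;
    }

    struct sockaddr_in *sin = (struct sockaddr_in *)res->ai_addr;
    char dotted[INET_ADDRSTRLEN];
    inet_ntop(AF_INET, &sin->sin_addr, dotted, sizeof dotted);

    /* Print both the human-friendly form and the raw number. */
    printf("localhost -> %s (%u)\n", dotted, ntohl(sin->sin_addr.s_addr));
    freeaddrinfo(res);
    return 0;
}
```

The same call resolves any host name the Domain Name System knows about; only the loopback name is hard-coded here for the sake of a self-contained example.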
Another important feature of DNS is that it makes host information available all over the Internet. Keeping information about hosts in a formatted file on a single computer only helps users on that computer. DNS provides a means of retrieving information remotely, from anywhere on the network.
More than that, DNS lets you distribute the management of host information among many sites and organizations. You don't need to submit your data to some central site or periodically retrieve copies of the "master" database. You simply make sure your section, called a zone, is up to date on your name servers. Your name servers make your zone's data available to all the other name servers on the network.
Because the database is distributed, the system also needs the ability to locate the data you're looking for by searching a number of possible locations. The Domain Name System gives name servers the intelligence to navigate through the database and find data in any zone.
Of course, DNS does have a few problems. For example, the system allows more than one name server to store data about a zone, for redundancy's sake. But inconsistencies can crop up between copies of the zone data.
But the worst problem with DNS is that despite its widespread use on the Internet, there's really very little documentation about managing and maintaining it. Most administrators on the Internet make do with the documentation their vendors see fit to provide, and with whatever they can glean from following the Internet mailing lists and Usenet newsgroups on the subject.
This lack of documentation means that the understanding of an enormously important internet service - one of the linchpins of today's Internet - is either handed down from administrator to administrator like a closely-guarded family recipe, or relearned repeatedly by isolated programmers and engineers. New administrators of domains suffer through the same mistakes made by countless others.
Our aim with this book is to help remedy this situation. We realize that not all of you have the time or the desire to become DNS experts. Most of you, after all, have plenty to do besides managing a domain or a name server: system administration, network engineering, or software development. It takes an awfully big institution to devote a whole person to DNS. We'll try to give you enough information to allow you to do what you need to do, whether that's running a small domain or managing a multinational monstrosity, tending a single name server or shepherding a hundred of them. Read as much as you need to know now, and come back later if you need to know more.
DNS is a big topic - big enough to require two authors, anyway - but we've tried to present it as sensibly and understandably as possible. The first two chapters give you a good theoretical overview and enough practical information to get by, and later chapters fill in the nitty-gritty details. We provide a roadmap up front, to suggest a path through the book appropriate for your job or interest.
When we talk about actual DNS software, we'll concentrate almost exclusively on BIND, the Berkeley Internet Name Domain software, which is the most popular implementation of the DNS specs (and the one we know best). We've tried to distill our experience in managing and maintaining a domain with BIND into this book - a domain, incidentally, that is one of the largest on the Internet. (We don't mean to brag, but we can use the credibility.) Where possible, we've included the real programs that we use in administration, many of them rewritten into Perl for speed and efficiency.
We hope that this book will help you get acquainted with DNS and BIND if you're just starting out, let you refine your understanding if you're already familiar with them, and provide valuable insight and experience even if you know 'em like the back of your hand.

Rabu, 14 Februari 2007

Maximum Security: Hacker's Guide to Protecting Your Internet Site and Network

Hacking and cracking are activities that generate intense public interest. Stories of hacked servers and downed Internet providers appear regularly in national news. Consequently, publishers are in a race to deliver books on these subjects. To its credit, the publishing community has not failed in this resolve. Security books appear on shelves in ever-increasing numbers. However, the public remains wary. Consumers recognize driving commercialism when they see it, and are understandably suspicious of books such as this one. They need only browse the shelves of their local bookstore to accurately assess the situation.
Books about Internet security are common (firewall technology seems to dominate the subject list). In such books, the information is often sparse, confined to a narrow range of products. Authors typically include full-text reproductions of stale, dated documents that are readily available on the Net. This poses a problem, mainly because such texts are impractical. Experienced readers are already aware of these reference sources, and inexperienced ones are poorly served by them. Hence, consumers know that they might get little bang for their buck. Because of this trend, Internet security books have sold poorly at America's neighborhood bookstores.
Another reason that such books sell poorly is this: The public erroneously believes that to hack or crack, you must first be a genius or a UNIX guru. Neither is true, though admittedly, certain exploits require advanced knowledge of the target's operating system. However, these exploits can now be simplified through utilities that are available for a wide range of platforms. Despite the availability of such programs, however, the public remains mystified by hacking and cracking, and therefore, reticent to spend forty dollars for a hacking book.
So, at the outset, Sams.net embarked on a rather unusual journey in publishing this book. The Sams.net imprint occupies a place of authority within the field. Better than two thirds of all information professionals I know have purchased at least one Sams.net product. For that reason, this book represented to them a special situation.
Hacking, cracking, and Internet security are all explosive subjects. There is a sharp difference between publishing a primer about C++ and publishing a hacking guide. A book such as this one harbors certain dangers, including
  • The possibility that readers will use the information maliciously
  • The possibility of angering the often-secretive Internet-security community
  • The possibility of angering vendors that have yet to close security holes within their software

Wireless LAN Communications

This document presents an overview of two IBM wireless LAN products, IBM Wireless LAN Entry and IBM Wireless LAN, and the technology they use for wireless communications. The information provided includes product descriptions, features, and functions. Some known product limitations, as well as a cross-product comparison, are included to assist the reader in understanding which product to use for given circumstances.
Also documented are examples of product setup, configuration and the development of various scenarios conducted by the authors. Our intended audience is customers, network planners, network administrators and system specialists who have a need to evaluate, implement and maintain wireless networks. A basic understanding of LAN communications terminology and familiarity with common IBM and industry network products and tools is assumed.

Netizens On the History and Impact of the Net

By Michael Hauben and Ronda Hauben

Introduction By Thomas Truscott

Netizens: On the Impact and History of Usenet and the Internet is an ambitious look at the social aspects of computer networking. It examines the present and the turbulent future, and especially it explores the technical and social roots of the "Net". A well told history can be entertaining, and an accurately told history can provide us valuable lessons. Here follow three lessons for inventors and a fourth for social engineers. Please test them out when reading the book.
The first lesson is to keep projects simple at the beginning. Projects tend to fail, so the more one can squeeze into a year, the better the chance of stumbling onto a success. Big projects do happen, but there is not enough time in life for very many of them, so choose carefully.
The second lesson is to innovate by taking something old and something new and putting them together in a new way. In this book the "something new" is invariably the use of a computer network. For example, ancient timesharing computer systems had local "mail" services so their users could communicate. But the real power of E-mail came when mail could be distributed to distant computers and all the networked users could communicate. Similarly, Usenet is a distributed version of preexisting bulletin-board-like systems. The spectacularly successful World Wide Web is just a distributed version of a hypertext document system. It was remarkably simple, and seemingly obvious, yet it caught the world by complete surprise. Here is another way to state this lesson: If a feature is good, then a distributed version of the feature is good. And vice-versa.
The third lesson is to keep on the lookout for "something new", or for something improved enough to make a qualitative difference. For example, in the future we will have home computers that are always on and connected to the Net. That is a qualitative difference that will trigger numerous innovations.
The fourth lesson is that we learn valuable lessons by trying out new innovations. Neither the original ARPAnet nor Usenet would have been commercially viable. Today there are great forces battling to structure and control the information superhighway, and it is invaluable that the Internet and Usenet exist as working models. Without them it would be quite easy to argue that the information superhighway should have a top-down hierarchical command and control structure. After all there are numerous working models for that.
It seems inevitable that new innovations will continue to make the future so bright that it hurts. And it also seems inevitable that as innovations permeate society the rules for them will change. I am confident that Michael Hauben and Ronda Hauben will be there to chronicle the rapidly receding history and the new future, as "Netizens" increasingly becomes more than a title for a book.

Looking Over the Fence at Networks: A Neighbor's View of Networking Research (2001)

The Internet has been highly successful in meeting the original vision of providing ubiquitous computer-to-computer interaction in the face of heterogeneous underlying technologies. No longer a research plaything, the Internet is widely used for production systems and has a very large installed base. Commercial interests play a major role in shaping its ongoing development. Success, however, has been a double-edged sword, for with it has come the danger of ossification, or inability to change, in multiple dimensions:
  • Intellectual ossification—The pressure for compatibility with the current Internet risks stifling innovative intellectual thinking. For example, the frequently imposed requirement that new protocols not compete unfairly with TCP-based traffic constrains the development of alternatives for cooperative resource sharing. Would a paper on the NETBLT protocol that proposed an alternative approach to control called “rate-based” (in place of “window-based”) be accepted for publication today?
  • Infrastructure ossification—The ability of researchers to affect what is deployed in the core infrastructure (which is operated mainly by businesses) is extremely limited. For example, pervasive network-layer multicast remains unrealized, despite considerable research and efforts to transfer that research to products.
  • System ossification—Limitations in the current architecture have led to shoe-horn solutions that increase the fragility of the system. For example, network address translation violates architectural assumptions about the semantics of addresses. The problem is exacerbated because a research result is often judged by how hard it will be to deploy in the Internet, and the Internet service providers sometimes favor more easily deployed approaches that may not be desirable solutions for the long run.

At the same time, the demands of users and the realities of commercial interests present a new set of challenges that may very well require a fresh approach. The Internet vision of the last 20 years has been to have all computers communicate. The ability to hide the details of the heterogeneous underlying technologies is acknowledged to be a great strength of the design, but it also creates problems because the performance variability associated with underlying network capacity, time-varying loads, and the like means that applications work in some circumstances but not others. More generally, outsiders advocated a more user-centric view of networking research—a perspective that resonated with a number of the networking insiders as well. Drawing on their own experiences, insiders commented that users are likely to be less interested in advancing the frontiers of high communications bandwidth and more interested in consistency and quality of experience, broadly defined to include the “ilities”—reliability, manageability, configurability, predictability, and so forth—as well as non-performance-based concerns such as security and privacy. (Interest was also expressed in higher-performance, broadband last-mile access, but this is more of a deployment issue than a research problem.) Outsiders also observed that while as a group they may share some common requirements, users are very diverse—in experience, expertise, and what they wish the network could do. Also, commercial interests have given rise to more diverse roles and complex relationships that cannot be ignored when developing solutions to current and future networking problems. These considerations argue that a vision for the future Internet should be to provide users the quality of experience they seek and to accommodate a diversity of interests.

Click to Read More

An Introduction to Socket Programming

By Reg Quinton
These course notes are directed at Unix application programmers who want to develop client/server applications in the TCP/IP domain (with some hints for those who want to write UDP/IP applications). Since the Berkeley socket interface has become something of a standard, these notes will apply to programmers on other platforms as well.
Fundamental concepts are covered including network addressing, well known services, sockets and ports. Sample applications are examined with a view to developing similar applications that serve other contexts. Our goals are
  • to develop a function, tcpopen(server,service), to connect to service.
  • to develop a server that we can connect to.

This course requires an understanding of the C programming language and an appreciation of the programming environment (i.e., compilers, loaders, libraries, Makefiles, and the RCS revision control system).
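The tcpopen(server, service) function that these notes set out to develop can be sketched with the getaddrinfo() interface. This is not the course's original code, just one plausible shape for it; the self-test stands up a listening socket on an ephemeral loopback port and connects to it, so it runs with no outside network, and error handling is kept minimal:

```c
/* Sketch of tcpopen(server, service): resolve, create a socket, connect. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int tcpopen(const char *server, const char *service)
{
    struct addrinfo hints, *res, *rp;
    int fd = -1;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_INET;
    hints.ai_socktype = SOCK_STREAM;   /* TCP: connection-oriented */

    if (getaddrinfo(server, service, &hints, &res) != 0)
        return -1;
    for (rp = res; rp != NULL; rp = rp->ai_next) {
        fd = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol);
        if (fd < 0)
            continue;
        if (connect(fd, rp->ai_addr, rp->ai_addrlen) == 0)
            break;                     /* connected */
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;                         /* -1 on failure, else a socket fd */
}

int main(void)
{
    /* Stand-in server: listen on an ephemeral loopback port. */
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                 /* let the kernel pick a port */
    bind(lfd, (struct sockaddr *)&addr, sizeof addr);
    listen(lfd, 1);

    /* Ask which port we got, and hand it to tcpopen as the "service". */
    socklen_t len = sizeof addr;
    getsockname(lfd, (struct sockaddr *)&addr, &len);
    char service[16];
    snprintf(service, sizeof service, "%d", ntohs(addr.sin_port));

    int fd = tcpopen("127.0.0.1", service);
    puts(fd >= 0 ? "connected" : "failed");
    if (fd >= 0) close(fd);
    close(lfd);
    return 0;
}
```

In real use the service argument would be a name such as "smtp", which getaddrinfo() looks up in the services database just as it looks up host names in DNS.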

Netstat Observations:
Inter Process Communication (or IPC) is between host.port pairs (or host.service if you like). A process pair uses the connection -- there are client and server applications on each end of the IPC connection.

Note the two protocols on IP -- TCP (Transmission Control Protocol) and UDP (User Datagram Protocol). There's a third protocol, ICMP (Internet Control Message Protocol), which we'll not look at -- it's what makes IP work in the first place!

TCP services are connection-oriented (like a stream, a pipe, or a tty-like connection) while UDP services are connectionless (more like telegrams or letters).

We recognize many of the services -- SMTP (Simple Mail Transfer Protocol, as used for E-mail), NNTP (Network News Transfer Protocol, as used by Usenet News), NTP (Network Time Protocol, as used by xntpd(8)), and SYSLOG (the BSD logging service implemented by syslogd(1M)).

The netstat(1M) display shows many TCP services as ESTABLISHED (there is a connection between client.port and server.port) and others in a LISTEN state (a server application is listening at a port for client connections). You'll often see connections in a CLOSE_WAIT state -- they're waiting for the socket to be torn down.

Click to Read More

Introduction to Securing Data in Transit

The secure transmission of data in transit relies on both encryption and authentication - on both the hiding or concealment of the data itself, and on ensuring that the computers at each end are the computers they say they are.
Authentication
Authentication is a difficult task - computers have no way of knowing that they are 'the computer that sits next to the printer on the third floor' or 'the computer that runs the sales for www.dotcom.com'. And those are the matters which are important to humans - humans don't care if the computer is '10.10.10.10', which is what the computers know.
However, if the computer can trust the human to tell it which computer address to look for - either in the numeric or the name form - the computers can then verify that each other is, in fact, the computer at that address. It's similar to using the post office - we want to know if 100 Somewhere Street is where our friend Sally is, but the post office just wants to know where to send the parcel.
The simplest form of authentication is to exchange secret information the first time the two computers communicate and check it on each subsequent connection. Most exchanges between computers take place over a long period of time, in computer terms, so they tend to do this in a small way for the duration of each connection - as if you were checking, each time you spoke in a phone call, that the person you were talking to was still that person. (Sally, is that you? Yeah. Good, now I was telling you about the kids .. is that still you?)
It may sound paranoid, but this sort of verification system can inhibit what is called a 'man in the middle' attack - where a third party tries to 'catch' the connection and insert their own information. Of course, this relies on the first communication not being intercepted.
Public key encryption (see below) is the other common means of authentication. It doesn't authenticate the sender, but it does authenticate the receiver - and if both parties exchange public keys, and verify by some independent means that the key they have is the key of the party they wish to send to, it authenticates both.

Introduction to Networking Technologies

There are many different computing and networking technologies -- some available today, some just now emerging, some well-proven, some quite experimental. Understanding the computing dilemma more completely involves recognizing these technologies, especially since a single technology by itself seldom suffices and multiple technologies are usually necessary.
This document describes a sampling of technologies of various types, by using a tutorial approach. It compares the technologies available in the three major technology areas: application support, transport networks, and subnetworking. In addition, the applicability of these technologies within a particular situation is illustrated using a set of typical customer situations.
This document can be used by consultants and system designers to better understand, from a business and technical perspective, the options available to solve customers' networking problems.

Introduction to Intrusion Protection and Network Security

If your computer is not connected to any other computers and doesn't have a modem, the only way anyone can access your computer's information is by physically coming to the computer and sitting at it. So securing the room it's in will secure the computer. As soon as your computer is connected to another computer you add the possibility that someone using the other computer can access your computer's information.
If your network (your connected computers) consists only of other computers in the same building you can still secure the network by securing the rooms the computers are in. An example of this would be two computers sharing the same files and printer, but not having a modem and not being connected to any other computers.
However, it's wise to learn about other ways to secure a network of connected computers, in case you add something later. Networks have a tendency to grow. If you have a network, an intruder who gains access to one computer has at least some access to all of them.
Note: Once someone has physical access to your computer, there are a number of ways that they can access your information. Most systems have some sort of emergency feature that allows someone with physical access to get in and change the superuser password, or access the data. Even if your system doesn't have that, or it's disabled, they can always just pick up the computer or remove the hard drive and carry it out. More on this in the physical security article.

Introduction to the Internet Protocols

This is an introduction to the Internet networking protocols (TCP/IP). It includes a summary of the facilities available and brief descriptions of the major protocols in the family.
What is TCP/IP?
TCP/IP is a set of protocols developed to allow cooperating computers to share resources across a network. It was developed by a community of researchers centered around the ARPAnet. Certainly the ARPAnet is the best-known TCP/IP network. However, as of June 1987, at least 130 different vendors had products that support TCP/IP, and thousands of networks of all kinds use it.
First some basic definitions. The most accurate name for the set of protocols we are describing is the "Internet protocol suite". TCP and IP are two of the protocols in this suite. (They will be described below.) Because TCP and IP are the best known of the protocols, it has become common to use the term TCP/IP or IP/TCP to refer to the whole family. It is probably not worth fighting this habit. However, this can lead to some oddities. For example, I find myself talking about NFS as being based on TCP/IP, even though it doesn't use TCP at all. (It does use IP. But it uses an alternative protocol, UDP, instead of TCP. All of this alphabet soup will be unscrambled in the following pages.)
The Internet is a collection of networks, including the Arpanet, NSFnet, regional networks such as NYsernet, local networks at a number of university and research institutions, and a number of military networks. The term "Internet" applies to this entire set of networks. The subset of them that is managed by the Department of Defense is referred to as the "DDN" (Defense Data Network). This includes some research-oriented networks, such as the Arpanet, as well as more strictly military ones. (Because much of the funding for Internet protocol developments is done via the DDN organization, the terms Internet and DDN can sometimes seem equivalent.) All of these networks are connected to each other. Users can send messages from any of them to any other, except where there are security or other policy restrictions on access. Officially speaking, the Internet protocol documents are simply standards adopted by the Internet community for its own use. More recently, the Department of Defense issued a MILSPEC definition of TCP/IP. This was intended to be a more formal definition, appropriate for use in purchasing specifications. However, most of the TCP/IP community continues to use the Internet standards. The MILSPEC version is intended to be consistent with it.

Internetwork Troubleshooting Handbook

Because of the rapid and ongoing developments in the field of networking, accurate troubleshooting information is an ever sought-after commodity, and the Cisco Press Internetworking Troubleshooting Handbook is therefore a valuable resource for networking professionals throughout the industry.
For the second edition of this book, we gathered together a team of troubleshooting experts who thoroughly revised the material in each of the technology areas to include the most current and relevant troubleshooting information and solutions available today. Their goal and ours was to provide networking professionals with a guide containing solutions to the problems encountered in the field in a format that is easy to apply. We hope that this publication meets that goal.
The Internetworking Troubleshooting Handbook was written as a resource for anyone working in the field of networking who needs troubleshooting reference information. We anticipate that the information in this publication will assist users in solving specific technology issues and problems that they encounter in their existing environments.

Internetworking over ATM: An Introduction

For the foreseeable future a significant percentage of devices using an ATM network will do so indirectly, and will continue to be directly attached to "legacy" media (such as Ethernet and token ring). In addition, these devices will continue to utilize "legacy" internetwork layer protocols (for example, IP, IPX, APPN, etc.). This means that in order to effectively use ATM, there must be efficient methods available for operating multiple internetwork layer protocols over heterogeneous networks built from ATM switches, routers, and other switched devices. This challenge is commonly referred to as the operation of multiprotocol over ATM.
This book reviews the various options for the transport and support of multiprotocols over ATM.
This book was written for networking consultants, systems specialists, system planners, network designers and network administrators who need to learn about SVN and associated protocols in order to design and deploy networks that utilize components from this framework. It provides readers with the ability to differentiate between the different offerings. A working knowledge of ATM is assumed.
It is intended to be used with "High-Speed Networking Technology: An Introductory Survey", which describes methods for data transmission in high-speed networks, and "Asynchronous Transfer Mode (ATM) Technical Overview", which describes ATM, a link-level protocol that uses those methods to transmit various types of data together over the same physical links. This book describes the networking protocols that use ATM as the underlying link-level protocol.

High-Speed Networking Technology: An Introductory Survey

This publication presents a broad overview of the emerging technology of very-high-speed communication. It is written at the technical conceptual level, with some areas of greater detail. It is intended to be read by computer professionals who have some understanding of communications (but who do not necessarily consider themselves experts).
The primary topics of the book are:
  • The Principles of High-Speed Networking
  • Fibre Optical Technology and Optical Networks
  • Local Area Networks (Token-Ring, FDDI, MetaRing, CRMA, Radio LANs)
  • Metropolitan Area Networks (DQDB, SMDS)
  • High-Speed Packet Switches (Frame Relay, Paris, plaNET)
  • High-Speed Cell Switching (ATM)

Click to Download

Computer Networks and Internets

Contains various network component specifications and photos, with explanations. The following networking topics are covered:
  • Motivation and Tools
  • Network Programming And Applications
  • Transmission Media
  • Local Asynchronous Communication (RS-232)
  • Long-Distance Communication (Carriers, Modulation, And Modems)
  • Packets, Frames, And Error Detection
  • LAN Technologies And Network Topology
  • Hardware Addressing And Frame Type Identification
  • LAN Wiring, Physical Topology, And Interface Hardware
  • Extending LANs: Fiber Modems, Repeaters, Bridges, and Switches
  • Long-Distance And Local Loop Digital Technologies
  • WAN Technologies And Routing
  • Connection-Oriented Networking And ATM
  • Network Characteristics: Ownership, Service Paradigm, And Performance
  • Protocols And Layering
  • Internetworking: Concepts, Architecture, and Protocols
  • IP: Internet Protocol Addresses
  • Binding Protocol Addresses (ARP)
  • IP Datagrams And Datagram Forwarding
  • IP Encapsulation, Fragmentation, And Reassembly
  • The Future IP (IPv6)
  • An Error Reporting Mechanism (ICMP)
  • UDP: Datagram Transport Service
  • TCP: Reliable Transport Service
  • Network Address Translation
  • Internet Routing
  • Client-Server Interaction
  • The Socket Interface
  • Example Of A Client And A Server
  • Naming With The Domain Name System
  • Electronic Mail Representation And Transfer
  • IP Telephony (VoIP)
  • File Transfer And Remote File Access
  • World Wide Web Pages And Browsing
  • Dynamic Web Document Technologies (CGI, ASP, JSP, PHP, ColdFusion)
  • Active Web Document Technologies (Java, JavaScript)
  • RPC and Middleware
  • Network Management (SNMP)
  • Network Security
  • Initialization (Configuration)

Click to Read More

Computer Networks

By Hans-Peter Bischof
An introduction to the organization and structuring of computer networks.
The following questions describe what will be covered in this course.
  • What is a computer network?
  • What can we do with a computer network?

Keywords: IP address, Ethernet address, TCP/IP, UDP, router, bridge, socket, rpc, rpcgen, server, client, arp, rarp ...

Protocol Layering
Protocol layering is a common technique to simplify networking designs by dividing them into functional layers, and assigning protocols to perform each layer's task.

For example, it is common to separate the functions of data delivery and connection management into separate layers, and therefore separate protocols. Thus, one protocol is designed to perform data delivery, and another protocol, layered above the first, performs connection management. The data delivery protocol is fairly simple and knows nothing of connection management. The connection management protocol is also fairly simple, since it doesn't need to concern itself with data delivery.

Protocol layering produces simple protocols, each with a few well-defined tasks. These protocols can then be assembled into a useful whole. Individual protocols can also be removed or replaced.
The most important layered protocol designs are the Internet's original DoD model, and the OSI Seven Layer Model. The modern Internet represents a fusion of both models.

Click to Read More

Complete WAP Security

from Certicom

The Wireless Application Protocol (WAP) is a leading technology for companies trying to unlock the value of the Mobile Internet.
Certicom products and services provide complete WAP security solutions today for all of those players involved in bringing the Internet to the mobile end-user — including content providers, equipment manufacturers, network operators, application service providers and enterprises.
WAP
The WAP (Wireless Application Protocol) is a suite of specifications that enable wireless Internet applications; these specifications can be found at http://www.wapforum.org. WAP provides the framework to enable targeted Web access, mobile e-commerce, corporate intranet access, and other advanced services to digital wireless devices, including mobile phones, PDAs, two-way pagers, and other wireless devices. The suite of WAP specifications allows manufacturers, network operators, content providers and application developers to offer compatible products and services that work across varying types of digital devices and networks. Even for companies wary of WAP, individual elements of the WAP standards can prove useful by providing industry-standard wireless protocols and data formats.
The WAP architecture is based on the realization that for the near future, networks and client devices (e.g., mobile phones) will have limited capabilities. The networks will have bandwidth and latency limitations, and client devices will have limited processing, memory, power, display and user interaction capabilities. Therefore, Internet protocols cannot be processed as is; an adaptation for wireless environments is required. The entire suite of WAP specifications is derived from equivalent IETF specifications used on the Internet, modified for use within the limited capabilities of the wireless world.
Furthermore, the WAP model introduces a Gateway that translates between WAP and Internet protocols. This Gateway is typically located at the site of the mobile operator, although sometimes it may be run by an application service provider or enterprise.

BSD Sockets

This file contains examples of clients and servers using several protocols, and may be very useful.
Sockets are a generalized networking capability first introduced in 4.1cBSD and subsequently refined into their current form with 4.2BSD. The sockets feature is available with most current UNIX system releases. (Transport Layer Interface (TLI) is the System V alternative). Sockets allow communication between two different processes on the same or different machines. Internet protocols are used by default for communication between machines; other protocols such as DECnet can be used if they are available.
To a programmer, a socket looks and behaves much like a low-level file descriptor. This is because commands such as read() and write() work with sockets in the same way they do with files and pipes. The differences between sockets and normal file descriptors occur in the creation of a socket and through a variety of special operations to control a socket. These operations differ because establishing network connections involves additional complexity compared with normal disk access.
For most operations using sockets, the roles of client and server must be assigned. A server is a process which does some function on request from a client. As will be seen in this discussion, the roles are not symmetric and cannot be reversed without some effort.
This description of the use of sockets progresses in three stages:
The use of sockets in a connectionless or datagram mode between client and server processes on the same host. In this situation, the client does not explicitly establish a connection with the server. The client, of course, must know the server's address. The server, in turn, simply waits for a message to show up. The client's address is one of the parameters of the message receive request and is used by the server for response.
The use of sockets in a connected mode between client and server on the same host. In this case, the roles of client and server are further reinforced by the way in which the socket is established and used. This model is often referred to as a connection-oriented client-server model.
The use of sockets in a connected mode between client and server on different hosts. This is the network extension of Stage 2, above.
The connectionless or datagram mode between client and server on different hosts is not explicitly discussed here. Its use can be inferred from the presentations made in Stages 1 and 3.
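The connection-oriented model of Stage 2 can be sketched with the BSD sockets API as exposed by Python's standard socket module. This is a minimal sketch: a server thread accepts one connection and echoes what it receives, while the client explicitly establishes the connection, sends data, and reads the reply. Binding to port 0 lets the operating system pick a free port:

```python
import socket
import threading

def run_server(server_sock: socket.socket) -> None:
    conn, addr = server_sock.accept()      # wait for a client to connect
    data = conn.recv(1024)                 # read()-like operation
    conn.sendall(b"echo: " + data)         # write()-like operation
    conn.close()

# Create a listening socket on the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))              # port 0: let the OS choose
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=run_server, args=(server,))
t.start()

# The client explicitly establishes a connection, then exchanges data.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
t.join()
server.close()
print(reply)  # b'echo: hello'
```

Note the asymmetry of the roles: only the server calls listen() and accept(), and only the client calls connect(), which is why client and server cannot be swapped without effort.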

Asynchronous Transfer Mode (ATM) Technical Overview

This publication presents a broad overview of the emerging technology of very high-speed communications. It is written at the "technical conceptual" level, with some areas covered in greater detail.
It was written for computer professionals who have some understanding of communications (but who do not necessarily consider themselves experts).
The primary topics of the book are:
  • Asynchronous Transfer Mode (ATM)
  • High-Speed Cell Switching
  • Broadband ISDN

This book is published by Prentice Hall and will be sold in external bookstores.

Click to Download

A new TCP congestion control with empty queues and scalable stability

By Fernando Paganini, Steven H. Low, Zhikui Wang, Sanjeewa Athuraliya and John C. Doyle

We describe a new congestion avoidance system designed to maintain dynamic stability on networks of arbitrary delay, capacity, and topology. This is motivated by recent work showing the limited stability margins of TCP Reno/RED as delay or network capacity scale up. Based on earlier work establishing mathematical requirements for local stability, we develop new flow control laws that satisfy these conditions together with a certain degree of fairness in bandwidth allocation. When a congestion measure signal from links to sources is available, the system can also satisfy the key objectives of high utilization and emptying the network queues in equilibrium.
We develop a packet-level implementation of this protocol, where the congestion measure is communicated back to sources via random exponential marking of an ECN bit. We discuss parameter choices for the marking and estimation system, and demonstrate using ns-2 simulations the stability of the protocol, and the near-empty equilibrium queues, for a wide range of delays. Comparisons with the behavior of Reno/RED are provided. We also explore the situation where ECN is not used, and instead queueing delay is used as a pricing signal. This alternative protocol is also stable, but will inevitably exhibit nontrivial queues.
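The random exponential marking scheme mentioned in the abstract can be sketched as follows. Under this scheme a link marks the ECN bit of each packet with probability 1 - phi**(-p), where p is the link's congestion measure ("price") and phi > 1 is a constant; because prices add along a path, the source can recover the end-to-end price from the fraction of marked packets. The values of phi and the prices below are illustrative, not taken from the paper:

```python
import math
import random

PHI = 1.1  # marking base; any constant > 1

def mark_probability(price: float) -> float:
    """Probability that a packet is marked at a path with this total price."""
    return 1.0 - PHI ** (-price)

def estimate_price(marked_fraction: float) -> float:
    """Invert the marking rule to recover the path price at the source."""
    return -math.log(1.0 - marked_fraction, PHI)

# The end-to-end unmarked probability is the product of per-link
# unmarked probabilities phi**(-p1) * phi**(-p2), so prices add.
random.seed(0)
path_price = 3.0 + 2.0                      # two links, prices 3 and 2
marked = sum(random.random() < mark_probability(path_price)
             for _ in range(100_000)) / 100_000
print(round(estimate_price(marked), 1))     # close to 5.0
```

The exact inversion is noise-free (estimate_price(mark_probability(p)) == p); the sampling loop just shows that the estimate from observed marks converges to the path price.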

A Comprehensive Guide to Virtual Private Networks, Volume III: Cross-Platform Key and Policy Management

This redbook closely examines the functionality of the Internet Key Exchange protocol (IKE) - which is derived from the Internet Security Associations Key Management Protocol (ISAKMP) and the Oakley protocol. IKE provides a framework and key exchange protocol for Virtual Private Networks (VPN) that are based on the IP Security Architecture (IPSec) protocols. An overview of VPN technologies based on the latest standards is provided in Part I.
This redbook also helps you understand, install and configure the most current VPN product implementations from IBM, in particular AIX, OS/400, Nways routers, OS/390, and several client and OEM platforms. After reading this redbook, you will be able to use those products to implement different VPN scenarios. An overview of the functions and configuration of the VPN components of those products is provided in Part II.
The main focus of this redbook is on how to implement complete VPN solutions using state-of-the-art VPN technologies, and to document IBM product interoperability. This redbook is therefore not meant to be an exhaustive VPN design guide. The authors would like to refer the reader to IBM security and network consulting services for that purpose.
This redbook is a follow-on to the VPN Vol. 1 (SG24-5201) and VPN Vol. 2 (SG24-5234) redbooks. A basic understanding of IP security and cryptographic concepts and network security policies is assumed.

A Comprehensive Guide to Virtual Private Networks, Volume II: IBM Nways Router Solutions

The Internet nowadays is not only a popular vehicle to retrieve and exchange information in traditional ways, such as e-mail, file transfer and Web surfing. It is being used more and more by companies to replace their existing telecommunications infrastructure with virtual private networks by implementing secure IP tunnels across the Internet between corporate sites as well as to business partners and remote locations.
This updated redbook includes the IPSec enhancements provided by Version 3.3 of the IBM Nways Multiprotocol Routing Services (MRS), Nways Multiprotocol Access Services (MAS) and Access Integration Services (AIS) that implement the Internet Key Exchange (IKE) protocol. This redbook also includes other new features, such as the policy engine, digital certificate and LDAP support, and QoS. The VPN scenarios are enhanced to reflect the latest implementation of IPSec and L2-tunneling functionality.
In this redbook we delve further into these scenarios by showing you how to implement solutions that exploit Data Link Switching (DLSw), IP Bridging Tunnels, Enterprise Extender (HPR over IP), APPN DLUR, TN3270, and Tunneling on layer 2 (L2TP, L2F, PPTP) through an IPSec tunnel.
A working knowledge of the IPSec protocols is assumed.

Designing A Wireless Network

By Jeffrey Wheat, Randy Hiser, Jackie Tucker, Alicia Neely and Andy McCullough

Understand How Wireless Communication Works
  • Step-by-Step Instructions for Designing a Wireless Project from Inception to Completion
  • Everything You Need to Know about Bluetooth, LMDS, 802.11, and Other Popular Standards
  • Complete Coverage of Fixed Wireless, Mobile Wireless, and Optical Wireless Technology

Introduction

You’ve been on an extended business trip and have spent the long hours of the flight drafting follow-up notes from your trip while connected to the airline’s onboard server. After deplaning, you walk through the gate and continue into the designated public access area. Instantly, your personal area network (PAN) device, which is clipped to your belt, beeps twice announcing that it has automatically retrieved your e-mail, voicemail, and videomail. You stop to view the videomail—a finance meeting—and also excerpts from your children’s school play.

Meanwhile, when you first walked into the public access area, your personal area network device contacted home via the Web pad on your refrigerator and posted a message to alert the family of your arrival. Your spouse will know you’ll be home from the airport shortly.

You check the shuttle bus schedule from your PAN device and catch the next convenient ride to long-term parking. You also see an e-mail from your MP3 group showing the latest selections, so you download the latest MP3 playlist to listen to on the way home.

As you pass through another public access area, an e-mail comes in from your spouse. The Web pad for the refrigerator inventory has noted that you’re out of milk, so could you pick some up on the way home? You write your spouse back and say you will stop at the store. When you get to the car, you plug your PAN device into the car stereo input port. With new music playing from your car stereo’s MP3 player, you drive home, with a slight detour to buy milk at the nearest store that the car’s navigation system can find.

The minute you arrive home, your PAN device is at work, downloading information to various devices. The data stored on your PAN device is sent to your personal computer (PC) and your voicemail is sent to the Bluetooth playback unit on the telephone-answering device. The PAN device sends all video to the television, stored as personal files for playback. As you place the milk in the refrigerator, the Web pad updates to show that milk is currently in inventory and is no longer needed. The kids bring you the television remote and you check out possible movies together to download later that night.

Click to Download

Designing a Wireless Network

Networking with z/OS and Cisco Routers: An Interoperability Guide

The increased popularity of Cisco routers has led to their ubiquitous presence within the network infrastructure of many enterprises. In such large corporations, it is also common for many applications to execute on the z/OS (formerly OS/390) platform. As a result, the interoperation of z/OS-based systems and Cisco network infrastructures is a crucial aspect of many enterprise internetworks.
This IBM Redbook provides a survey of the components necessary to achieve full interoperation between your z/OS-based servers and your Cisco IP routing environment. It may be used as a network design guide for understanding the considerations of the many aspects of interoperation. We divide this discussion into four major components:
  • The options and configuration of channel-attached Cisco routers
  • The design considerations for combining OSPF-based z/OS systems with Cisco-based EIGRP networks
  • A methodology for deploying Quality of Service policies throughout the network
  • The implementation of load balancing and high availability using Sysplex Distributor and MNLB (including new z/OS V1R2 support)

We highlight our discussion with a realistic implementation scenario and real configurations that will aid you in the deployment of these solutions. In addition, we provide in-depth discussions, traces, and traffic visualizations to show the technology at work.

Click to Download

Networking Fundamentals, v4.0

Networks are interconnections of computers. These computers can be linked together using a wide variety of cabling types, and for many different purposes.
The basic reasons why computers are networked are:
  • to share resources (files, printers, modems, fax machines)
  • to share application software (MS Office)
  • to increase productivity (making it easier to share data amongst users)

Take for example a typical office scenario where a number of users in a small business require access to common information. As long as all user computers are connected via a network, they can share their files, exchange mail, schedule meetings, send faxes and print documents all from any point of the network.

It would not be necessary for users to transfer files via electronic mail or floppy disk; rather, each user could access all the information they require, leading to less wasted time and hence greater productivity.

Imagine the benefits of a user being able to directly fax the Word document they are working on, rather than print it out, then feed it into the fax machine, dial the number etc.

Small networks are often called Local Area Networks (LANs). A LAN is a network allowing easy access to other computers or peripherals. The typical characteristics of a LAN are:

  • physically limited (less than 2 km)
  • high bandwidth (greater than 1 Mbps)
  • inexpensive cable media (coax or twisted pair)
  • data and hardware sharing between users
  • owned by the user

Click to Read More about networking

Wireless Network Security 802.11, Bluetooth and Handheld Devices

By Tom Karygiannis and Les Owens
Wireless communications offer organizations and users many benefits such as portability and flexibility, increased productivity, and lower installation costs. Wireless technologies cover a broad range of differing capabilities oriented toward different uses and needs. Wireless local area network (WLAN) devices, for instance, allow users to move their laptops from place to place within their offices without the need for wires and without losing network connectivity. Less wiring means greater flexibility, increased efficiency, and reduced wiring costs. Ad hoc networks, such as those enabled by Bluetooth, allow data synchronization with network systems and application sharing between devices. Bluetooth functionality also eliminates cables for printer and other peripheral device connections. Handheld devices such as personal digital assistants (PDA) and cell phones allow remote users to synchronize personal databases and provide access to network services such as wireless e-mail, Web browsing, and Internet access. Moreover, these technologies can offer dramatic cost savings and new capabilities to diverse applications ranging from retail settings to manufacturing shop floors to first responders.
However, risks are inherent in any wireless technology. Some of these risks are similar to those of wired networks; some are exacerbated by wireless connectivity; some are new. Perhaps the most significant source of risks in wireless networks is that the technology’s underlying communications medium, the airwave, is open to intruders, making it the logical equivalent of an Ethernet port in the parking lot.
The loss of confidentiality and integrity and the threat of denial of service (DoS) attacks are risks typically associated with wireless communications. Unauthorized users may gain access to agency systems and information, corrupt the agency’s data, consume network bandwidth, degrade network performance, launch attacks that prevent authorized users from accessing the network, or use agency resources to launch attacks on other networks.

A Beginner’s Guide to Network Security

An Introduction to the Key Security Issues for the E-Business Economy
With the explosion of the public Internet and e-commerce, private computers and computer networks, if not adequately secured, are increasingly vulnerable to damaging attacks. Hackers, viruses, vindictive employees and even human error all represent clear and present dangers to networks. And all computer users, from the most casual Internet surfers to large enterprises, could be affected by network security breaches. However, security breaches can often be easily prevented. How? This guide provides you with a general overview of the most common network security threats and the steps you and your organization can take to protect yourselves from threats and ensure that the data traveling across your networks is safe.
Importance of Security
The Internet has undoubtedly become the largest public data network, enabling and facilitating both personal and business communications worldwide. The volume of traffic moving over the Internet, as well as corporate networks, is expanding exponentially every day. More and more communication is taking place via e-mail; mobile workers, telecommuters, and branch offices are using the Internet to remotely connect to their corporate networks; and commercial transactions completed over the Internet, via the World Wide Web, now account for large portions of corporate revenue.

Local Area Network Concepts and Products: Routers and Gateways

Local Area Network Concepts and Products is a set of four reference books for those looking for conceptual and product-specific information in the LAN environment. They provide a technical introduction to the various types of IBM local area network architectures and product capabilities.
The four volumes are as follows:
SG24-4753-00 - LAN Architecture
SG24-4754-00 - LAN Adapters, Hubs and ATM
SG24-4755-00 - Routers and Gateways
SG24-4756-00 - LAN Operating Systems and Management
These redbooks complement the reference material available for the products discussed. Much of the information detailed in these books is available through current redbooks and IBM sales and reference manuals. It is therefore assumed that the reader will refer to these sources for more in-depth information if required.
These documents are intended for customers, IBM technical professionals, services specialists, marketing specialists, and marketing representatives working in networking and in particular the local area network environments.
Details on installation and performance of particular products will not be included in these books, as this information is available from other sources.
Some knowledge of local area networks, as well as an awareness of the rapidly changing intelligent workstation environment, is assumed.

Linux IPv6 HOWTO

By Peter Bieringer
IPv6 is a new layer 3 protocol (see linuxports/howto/intro_to_networking/ISO - OSI Model) which will supersede IPv4 (also known as IP). IPv4 was designed a long time ago (RFC 760 / Internet Protocol, from January 1980), and since its inception there have been many requests for more addresses and enhanced capabilities. The latest RFC is RFC 2460 / Internet Protocol Version 6 Specification. Major changes in IPv6 are the redesign of the header, including the increase of the address size from 32 bits to 128 bits. Because layer 3 is responsible for end-to-end packet transport using packet routing based on addresses, it must include the new IPv6 addresses (source and destination), just as IPv4 does.
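The address-size change can be seen directly with Python's standard ipaddress module; the addresses below are the documentation-reserved example prefixes, used purely for illustration:

```python
import ipaddress

v4 = ipaddress.ip_address("192.0.2.1")    # a 32-bit IPv4 address
v6 = ipaddress.ip_address("2001:db8::1")  # a 128-bit IPv6 address

print(v4.version, v4.max_prefixlen)   # 4 32
print(v6.version, v6.max_prefixlen)   # 6 128

# The 128-bit space is 2**96 times larger than the 32-bit one.
print(2**128 // 2**32)                # 79228162514264337593543950336
```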
Because of a lack of manpower, the IPv6 implementation in the kernel was unable to follow the discussed drafts or newly released RFCs. In October 2000, a project called USAGI was started in Japan, whose aim was to implement all missing or outdated IPv6 support in Linux. It tracks the current IPv6 implementation in FreeBSD made by the KAME project. From time to time they create snapshots against the current vanilla Linux kernel sources.
USAGI is now making use of the new Linux kernel development series 2.5.x to insert all of their current extensions into this development release. Hopefully the 2.6.x kernel series will contain a true and up-to-date IPv6 implementation.