Tuesday, November 10, 2009

Disaster Survival Guide for Business Communication Networks: Strategies for Planning, Response and Recovery in Data and Telecom Systems, 1st edition

Overview
Many organizations have begun to take a more focused approach to disaster planning. But in today's perilous climate, which includes not only terrorist attacks but also the never-ending onslaught of hackers, crackers, computer viruses, "tele-thieves," earthquakes, fires, floods, lightning strikes, and other disasters, ensuring the survivability of your organization is more challenging than ever before.

This book is written specifically for IT system administrators and telecom managers who want to protect their organization's increasingly complicated and diverse telecom and datacom networks, as well as the telephone systems, Internet sites, computers, and data-laden storage devices attached to them.

This unique guide argues that managers must take a new approach to protecting their assets. Instead of just implementing monitoring and detection measures with immediate intervention to combat disasters and ensure business continuity, managers should extend and intensify their security planning phase, reducing the possible magnitude of catastrophic incidents in advance by re-engineering their business into a distributed, decentralized, and hardened organization. This can be done using the latest in communications technology (virtual private networks, conferencing technology, and the Internet) as well as by networking and managing a new generation of the traditional "uptime" technologies (fault-tolerant computer telephony, convergence, and power supply systems).

With such a strategy, network and system administrators can defend not just their telecom, information, and computing assets to the fullest extent possible, but can equally safeguard their most important asset: their employees.

And whether it's identifying how hackers and crackers find vulnerabilities in your system, how unnoticeable levels of static electricity can shorten the life of electronic components, or how a centralized workforce can cripple recovery efforts, this book examines the full range of possible security and infrastructure weaknesses that can threaten a business, then helps you formulate and execute commonsense practices to counter them.

Whatever your role in preserving your organization's operating functionality, network infrastructure, and data integrity, this book is a "must have" you'll find yourself referring to time and again.



Source: www.the-resource-center.com/BOOKS/Networking/Disaster_survival.HTM

The Call Heard Round The World: Voice Over Internet Protocol And The Quest For Convergence

Overview
The Internet still has many great applications left up its sleeve. Chief among them is the seamless integration of voice transmission with text, audio, video, and graphics - all delivered simultaneously through the high bandwidth of the Internet.

As Chief Operating Officer of Net2Phone, author David Greenblatt guided the company to a 40% share of this snowballing market, known interchangeably as VoIP (Voice over Internet Protocol) or IP telephony. With sister company Adir Technologies, Greenblatt leveraged his knowledge and vision in collaborations with powerhouse companies like Cisco, Yahoo!, AOL, RealNetworks, and Microsoft.

The evolution of Net2Phone and Adir forms the framework for a revealing look at the immediate and long-range future of communication and information delivery. The Call Heard 'Round the World describes the evolution-in-progress of VoIP technology, from its current applications (it powers each of the untold millions of instant messages sent over AOL, MSN, and Yahoo!) to the impact it will have on business models and investment strategy in a broad range of industries. In addition to the author's own experiences and perspectives, the book features observations from respected analysts, investment professionals, business leaders, and industry experts.

This first-of-its-kind look at the vast potential of IP telephony illustrates the long-term commitment that both technology companies and service providers have made to bringing convergent solutions to market. It also proves beyond doubt that Internet technology is once again poised to change the way we live and work for the better.

This unprecedented "insider's view" gives you an in-depth look at:

VoIP technology from both an industry and market perspective
The crucial roles played by Net2Phone and Adir in the development of convergent solutions and services
How VoIP is playing out in financial markets and the business world
The huge impact that VoIP has had, and will have, on telecommunications companies
How cable is driving the convergence movement and bringing more and better services to homes and businesses
And much more


Source: www.the-resource-center.com/BOOKS/Telecom/call_heard_round_world.HTM

PBX Systems For IP Telephony: Migrating Enterprise Communications, by Allan Sulkin, softcover, 487 pages, 2002, $59.95

Overview
The most efficient (and economical) ways to bring enterprise communication systems into the Digital Age are in this guide, written by the foremost analyst in the market space. In PBX Systems for IP Telephony, Allan Sulkin, consultant and advisor to Avaya, Siemens, Cisco, NEC, Alcatel and other world-class companies, evaluates technologies, markets, and best practices for enterprise voice systems, messaging, and customer contact centers.

The heart and brains of your communications network, the PBX (Private Branch Exchange) can be the vital link, or the missing link, between businesses and their customers. This guide, from the recognized expert in telephony systems, provides answers. Whether you need to IP-enable a PBX system for a small business, make complex choices for an advanced call center, or gain the expertise to integrate a variety of communication systems into a state-of-the-art foundation for your e-business vision, PBX Systems for IP Telephony should be your first choice. Here's why:


No one knows PBX systems and markets better than the author, and no one is better at explaining them.
This comprehensive resource supplies nuts-and-bolts information on costs, performance, risks, and other real-world considerations difficult to research.
You get insights into the potential strengths and weaknesses of next-generation PBX systems.
You'll consult the consultant to the system designers for practical advice on systems that fit your needs and your future.
There's no more business-aware or user-friendly guide anywhere to converging your voice systems with your IP-based data systems.
When it comes to the PBX, the question often seems to be "Whose job is it anyway?" With this guidebook, you'll be ready to take the responsibility and get the credit.



Source: www.the-resource-center.com/BOOKS/Telecom/pbx_systems.HTM

The Telecom Tutorials: A Practical Guide for Managing Business Telecommunications Resources, 2nd edition, by Jane Laino, 286 pages, 2001, $24.95

Overview
If you're asking yourself any of the following questions, this book's for you!

What's the best approach to purchasing a telephone system or local and long distance telephone service? Where are the pitfalls?
How can we reduce and control our telecommunications expenses?
How can we improve our organization's telephone service to callers and staff?
How well can our telephone systems and services accommodate our changing business requirements?
How can we understand 'who does what' in the telecommunications industry?
Jane Laino has worked on the answers to these questions for over 25 years. In her comfortable style, she takes you through these 35 tutorials using clear, non-technical language that is easy to understand. You'll pick up tips you can use right away and others to mentally file away until you need them. One fan of Jane's tutorials writes, "I read so much claptrap, it was a pleasant surprise to see your tutorial make a complex subject seem simple and understandable."



Source: www.the-resource-center.com/BOOKS/Telecom/telecom_tutorials.HTM

Communications Systems and Networks by Ray Horak

An excellent technical overview, supported by brief historical context, of communications technologies from the telegraph to the present day: DSL, satellite, cell phones, pagers, telephones, and more. - The Digest Editors

Amazon: $34.99
Barnes & Noble: $39.99
Powell's: $49.99


Source: www.telecombookshelf.com/technology.html

Information Warfare and Cyber Security

Author(s) : R.C. Mishra
Format : Hardcover
ISBN-13 : 9788172731434
ISBN-10 : 8172731434
Pages : viii+218p., Figures; References; Bibliography; Index; 23cm.
Pub. Date : 01 Jan 2009, Reprint.
Publisher : Authors Press
Language(s) : English
Bagchee ID : BB12289


Source: www.bagchee.com/en/books/view/12289/information_warfare_and_cyber_security

A Human Silicon Chip

Author(s) : Nishi Chawla
Format : Hardcover
ISBN-10 : 812410980X
Pages : 150p., 23cm.
Pub. Date : 01 Jan 2003, 1st ed.
Publisher : Har Anand Publications Pvt. Ltd.
Language(s) : English
Bagchee ID : BB11733

Source: www.bagchee.com/en/books/view/11733/a_human_silicon_chip

E-Mail Hacking: Even You Can Hack!

Author(s) : Ankit Fadia
Format : Softcover
ISBN-13 : 9788125918134
Pages : viii+101p., Maps; Tables; Figures; 22cm.
Pub. Date : 01 Jan 2006, 1st ed.
Publisher : Vikas Publishing House Pvt. Ltd.
Language(s) : English
Bagchee ID : BB25867


Source: www.bagchee.com/en/books/view/25867/e_mail_hacking_even_you_can_hack

The Age of Nanotechnology

Author(s) : Nirmala Rao Khadpekar (ed.)
Format : Softcover
ISBN-13 : 9788131408285
Pages : xiv+238p., Figures; B/w Plates; Index; 24cm.
Pub. Date : 08 Nov 2007, 1st ed.
Publisher : The ICFAI University Press
Language(s) : English
Bagchee ID : BB51974


Source: www.bagchee.com/en/books/view/51974/the_age_of_nanotechnology

Neural Network and Fuzzy Logic

Author(s) : S.K. Dass
Format : Hardcover
ISBN-13 : 9788183291101
Pages : viii+245p., Tables; Figures; Bibliography; Index; 23cm.
Pub. Date : 01 Jan 2006, 1st ed.
Publisher : Shree Publishers & Distributors
Language(s) : English
Bagchee ID : BB26420


Source: www.bagchee.com/en/books/view/26420/neural_network_and_fuzzy_logic

Telecom Technologies: Emerging Trends

Author(s) : B. Ravi Kumar Jain, S.C. Latha
Format : Hardcover
ISBN-10 : 813140403X
Pages : 344p.
Pub. Date : 08 Nov 2006, 1st ed.
Publisher : The ICFAI University Press
Language(s) : English
Bagchee ID : BB52002


Source: www.bagchee.com/en/books/view/52002/telecom_technologies_emerging_trends

Technical Solutions for Business

At Integra, we not only respect our customers, but also realize that they form an integral part of our business. We have extensive customer interaction right from the project initiation stage to understand their business needs better. Our 'Technology Solutions for Business' team treats each business individually, understands its needs and requirements, and, more importantly, proposes a personalized solution to suit those needs.

Our team is experienced in the latest technological advancements and blends them with integral business practices to create solutions for our customers.

To develop applications for various customers in accordance with their requirements, we use conventional methodologies such as the spiral model, as well as i-Fi®, Integra's exclusive methodology.

We offer services related to application software development in the following areas:


• Business Application Development
• Reengineering, Adaptation, Porting and Migration of applications
• Software Maintenance
• Internet/Intranet solutions as value creators for businesses
• Application Testing


Our technological expertise extends to the latest technologies, including:


• Java/J2EE
• Microsoft Technologies
• COBOL, JCL
• MySQL, Oracle, MS SQL, PostgreSQL
• Web Services: .Net, AXIS
• Workflow solutions: MS BizTalk
• SOA Technologies


Finance and billing, banking applications and industrial automation are our key service domains in addition to telecom, document management and content management.


Source: www.integramicro.com/IMSS/techsolutions.htm

Imaging and Networking Services

Integra has a long track record of building solutions around document imaging technology. Since the early 1990s, Integra has been involved in several high-profile projects requiring processing of very large volumes of digital images using limited computing resources. One example is the election ID card generation system, which captured data and generated over 80 million identity cards for the Election Commission of India.

Storage and retrieval of digitized signatures as part of bank account information is another area in which Integra has remained a leader. The ScanCom product has become the de facto standard for enabling terminal-based applications to display digital images along with the corresponding bank account information. The IDEA imaging toolkit was developed specifically for this purpose, and is now used in a variety of other applications that require image processing capability.

The expertise built up in imaging is of great relevance in today's world, where records and historical documents of many types are being converted into digital form. Managing such documents is another Herculean task, considering the volume of data and the need to be able to retrieve a document quickly. Integra has successfully developed and delivered jukebox management software, integrated with a document management system to address these needs.

Integra has thus consistently managed to come up with the right solutions to address the problems of a broad spectrum of businesses: those that need efficient and cost-effective systems to manage huge volumes of digital images and documents.

With this vast array of experience in the industry, Integra offers specialized services in software design, development and delivery to address document imaging and management needs of large businesses. Integra's services are sought after in situations where the available computing power is overwhelmed by the sheer volume of data; when producers and consumers of this data need new and innovative solutions to be able to continue to work with the data within the constraints of the existing technology.


Source: www.integramicro.com/IMSS/imagingnetworking.htm

Convergence Technology Services

Better speed, lower power consumption and enhanced functionality are the new watchwords of embedded systems in the current market. Entertainment, information and communication are essential features of any device in today's world. Convergence technology has revolutionized the digital world and made it easier to work with embedded devices.

Integra recognizes this outlook and strives to build competence in the convergence space for the new generation of applications that will play a vital part in the future market. Integra's convergence practice emphasizes the latest developments in technology and expertise in handling various embedded devices.

The technology focus of Integra in the convergence sector is in the areas of VoIP, IPTV, WiMAX, Biometrics, RFID, SIP, RTP and related protocols. Integra's strong product development background and deep understanding of networking and IP-related protocols back its commitment to convergence. Anticipating the market growth for convergence, Integra has initiated competency building in IP-based home devices in the areas of VoIP and IPTV. Integra's aim is to build competency in these areas and to associate with companies in the convergence and triple-play space.

Integra strives to create devices and services that are

Performance enhanced
Reliable
Functional
Power-efficient
Feature-rich
Less expensive
Current services in the convergence unit range from concept to product design, platform development, OS porting and consulting, provided in the consumer electronics domain in the areas of product design, application development and testing.

Source: www.integramicro.com/IMSS/convergencetech.htm

Telecom Technology Services

Telecom technology has been one of Integra's focus areas right from its inception. Integra has years of experience in developing and maintaining telecom middleware and solutions such as Push-To-Talk (PTT), Net Dispatch Messenger and the Application Server Test Tool Framework, development for SIP and CDMA, and applications for mobile phones and PDAs.

Integra also has the expertise to handle testing assignments, both onsite and offsite. Examples include sub-system testing for CDMA infrastructure; testing and certification of GSM, GPRS, CDMA and 3G-based mobile handsets for the global market; system and usability testing of different features of mobile handsets; developing and maintaining test cases; testing the accessories of mobile handsets; and string validation in various languages. Integra has emerged as an expert in the areas of sanity/smoke testing, functional (features) and usability (feature interaction) testing, stress testing, regression testing, acceptance testing and creative testing, and has tested more than 50 models of mobile handsets for some of the major developers and marketers of mobile handsets.

Integra believes in testing a product from the user's perspective and lays emphasis upon creative testing wherein engineers are encouraged to think beyond the test cases so as to track hidden bugs in the device. The efforts put in this direction have helped our customers minimize the instances of product recall and this has not only boosted our confidence, but has also helped us become the preferred partners for our valued clients.

Integra has the experience and expertise to take up the integration of third-party software for the infrastructure as well as the terminal (handset) and provide end-to-end services to its clients. Integra also specializes in setting up offshore development and testing centres, and has been providing such services to some of the big players in the telecom domain.



Source: www.integramicro.com/IMSS/telecomtech.htm

The Evolution of Telecom Technologies: Current Trends and Near-Future Implications

A series of case studies of developments in mobile and wireless telephony across the Irish border, from a research team led by two of Ireland's leading specialists in information retrieval, data analysis and image and signal processing: Professor Fionn Murtagh of Queen's University Belfast and Dr John Keating of National University of Ireland Maynooth. The project was sponsored by eircom. Among the project's outcomes are:

The first comprehensive analysis of cross-border, 'roaming' and other mobile phone charges in Northern Ireland and the Republic of Ireland
The creation of a unique online system, www.B4Ucall.com, to allow consumers to monitor the cost of mobile phone calls on the island of Ireland
A study of the benefits (including cost savings) of developing telecardiology services throughout the island, thus facilitating remote diagnosis and therapy delivery for geographically remote patients and their GPs
An outline study of low cost, cross-border video-conferencing


Source: www.crossborder.ie/research/telecomtechhome.php

SS7 Over IP

Service providers can cut costs with SS7oIP by offloading data traffic from SS7 networks onto IP networks. For example, Short Message Service (SMS) data is saturating GSM service providers' SS7 networks. SS7 Over IP enables wireless service providers to rapidly deploy emerging IP-based services for the mobile Internet that freely interact with the legacy mobile infrastructure.

SIGTRAN is the name given to an IETF working group that produced specifications for a family of protocols providing reliable datagram service and user-layer adaptations for SS7 and ISDN communications protocols. The most significant protocol defined by the SIGTRAN group was the Stream Control Transmission Protocol (SCTP), which uses the Internet Protocol (IP) as its network protocol.

SCTP is a reliable transport protocol operating on top of a potentially unreliable connectionless packet service such as IP. It offers acknowledged error-free non-duplicated transfer of datagrams (messages). Detection of data corruption, loss of data and duplication of data is achieved by using checksums and sequence numbers. A selective retransmission mechanism is applied to correct loss or corruption of data.
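
As a concrete illustration of the TCP-like "one-to-one" style of SCTP, here is a minimal sketch in Python. This assumes a Linux host with kernel SCTP support; socket.IPPROTO_SCTP is only defined on platforms that expose it, and a real signaling gateway would of course do far more:

import socket

# Minimal one-to-one SCTP echo server (sketch). SOCK_STREAM with
# IPPROTO_SCTP requests TCP-like semantics on top of SCTP's reliable,
# sequenced, checksummed message transfer described above.
def sctp_echo_server(host="127.0.0.1", port=9999):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
    srv.bind((host, port))
    srv.listen(1)
    conn, peer = srv.accept()
    data = conn.recv(1024)   # each message arrives intact and in order
    conn.send(data)          # echo it back; SCTP handles acknowledgment
    conn.close()
    srv.close()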



Benefits:

Ease of deployment: When using signaling gateways (such as access service group [ASG]), there is no need to disrupt the existing SS7 network, and future enhancements are transparent.

Less costly equipment: There is no need for further expensive investments in the legacy signaling elements.

Better efficiency: SIGTRAN over an IP network doesn't require physical E1/T1 links over synchronous digital hierarchy (SDH) rings. Using new technologies like IP over SDH and IP over fiber can achieve much higher throughput.

Higher bandwidth: SIGTRAN traffic over IP is not constrained by link capacity as it is in the SS7 network. The IP network is much more flexible than the TDM-based legacy network.

Enhanced services: Implementing a core IP network facilitates a variety of new solutions and value-added services (VAS).



RFCs

RFC 3286 -- An Introduction to the Stream Control Transmission Protocol

RFC 3257 -- Stream Control Transmission Protocol Applicability Statement

RFC 2960 -- Stream Control Transmission Protocol

RFC 3873 -- Stream Control Transmission Protocol (SCTP) Management Information Base (MIB)

RFC 3758 -- Stream Control Transmission Protocol (SCTP) Partial Reliability Extension

RFC 3436 -- Transport Layer Security over Stream Control Transmission Protocol


Source: www.telecomspace.com/interworking-ss7oip.html

Intelligent Network Application Part (INAP)

Intelligent Network Application Part (INAP) is the signaling protocol used in Intelligent Networking. Developed by the International Telecommunication Union (ITU), IN is recognized as a global standard. Within the ITU, the total functionality of the IN has been defined and implemented in digestible segments called capability sets. The first version to be released was Capability Set 1 (CS-1). Currently CS-2 is defined and available. The CAMEL Application Part (CAP) is a derivative of INAP and enables the use of INAP in mobile GSM networks.

INAP is a signaling protocol between a service switching point (SSP), network media resources (intelligent peripherals), and a centralized network database called a service control point (SCP). The SCP contains operator- or third-party-derived service logic programs and data.

Service Switching Point (SSP) is a physical entity in the Intelligent Network that provides the switching functionality. The SSP is the point of subscription for the service user, and is responsible for detecting special conditions during call processing that cause a query for instructions to be issued to the SCP.

The SSP contains detection capability to detect requests for IN services. It also contains capabilities to communicate with other physical entities containing the SCF, such as the SCP, and to respond to instructions from those entities. Functionally, an SSP contains a Call Control Function, a Service Switching Function, and, if the SSP is a local exchange, a Call Control Agent Function. It may also optionally contain a Service Control Function, a Specialized Resource Function, and/or a Service Data Function. The SSP may provide IN services to users connected to subtending Network Access Points.

The SSP is usually provided by the traditional switch manufacturers. These switches are programmable and can be implemented using multipurpose processors. The main difference between an SSP and an ordinary switch is in the software, where the IN service control is separated from the basic call control.

Service Control Point (SCP) validates and authenticates information from the service user, processing requests from the SSP and issuing responses. The SCP stores the service provider instructions and data that direct switch processing and provide call control. At predefined points during processing an incoming or outgoing call, the switch suspends what it is doing, packages up information it has regarding the processing of the call, and queries the SCP for further instruction. The SCP executes user-defined programs that analyze the current state of the call and the information received from the switch. The programs can then modify or create the call data that is sent back to the switch. The switch then analyzes the information received from the SCP and follows the provided instruction to further process the call.
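
This suspend-query-resume cycle can be pictured in a few lines of Python. A toy model only: the trigger prefixes, message fields and routing table below are invented, and real INAP exchanges are TCAP dialogues over the signalling network, not function calls:

# Toy model of the SSP/SCP query cycle described above (illustrative only).
TRIGGERS = {"800", "888"}   # hypothetical detection-point prefixes

def scp_service_logic(query):
    # Stand-in for a Service Logic Program: translate a toll-free
    # number into a routable one.
    routing_table = {"8005551234": "7345551234"}
    return {"action": "ROUTE", "number": routing_table[query["dialed"]]}

def ssp_process_call(dialed):
    if dialed[:3] in TRIGGERS:                            # detection point hit
        response = scp_service_logic({"dialed": dialed})  # suspend, query SCP
        return response["number"]                         # resume with instruction
    return dialed                                         # ordinary call

print(ssp_process_call("8005551234"))  # -> 7345551234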

Functionally, an SCP contains a Service Control Function (SCF) and optionally also a Service Data Function (SDF). The SCF is implemented in Service Logic Programs (SLPs). The SCP is connected to SSPs by a signalling network. Multiple SCPs may contain the same SLPs and data to improve service reliability and to facilitate load sharing between SCPs. In the case of an external Service Data Point (SDP), the SCF can access data through a signalling network. The SDP may be in the same network as the SCP, or in another network. The SCP can be connected to SSPs, and optionally to IPs, through the signalling network. The SCP can also be connected to an IP via an SSP relay function.

The SCP comprises the SCP node, the SCP platform, and applications. The node performs functions common to applications, or independent of any application; it provides all functions for handling service-related, administrative, and network messages. These functions include message discrimination, distribution, routing, and network management and testing. For example, when the SCP node receives a service-related message, it distributes the incoming message to the proper application. In turn, the application issues a response message to the node, which routes it to the appropriate network elements. The SCP node gathers data on all incoming and outgoing messages to assist in network administration and cost allocation. This data is collected at the node, and transmitted to an administrative system for processing.

Intelligent Peripheral (IP) provides resources such as customized and concatenated voice announcements, voice recognition, and Dual Tone Multi-Frequency (DTMF) digit collection, and contains a switching matrix to connect users to these resources. The IP supports flexible information interactions between a user and the network. Functionally, the IP contains the Special Resource Function. The IP may directly connect to one or more SSPs, and/or may connect to the signalling network.

Service Management Point (SMP) performs service management control, service provision control, and service deployment control. Examples of functions it can perform are database administration, network surveillance and testing, network traffic management, and network data collection. Functionally, the SMP contains the Service Management Function and, optionally, the Service Management Access Function and the Service Creation Environment Function. The SMP can access all other Physical Entities.

Conceptual model of the Intelligent Network:

The IN standards present a conceptual model of the Intelligent Network that models and abstracts the IN functionality in four planes:

The Service Plane (SP): This plane is of primary interest to service users and providers. It describes services and service features from a user perspective, and is not concerned with how the services are implemented within the network.
The Global Functional Plane (GFP): The GFP is of primary interest to the service designer. It describes units of functionality, known as service independent building blocks (SIBs) and it is not concerned with how the functionality is distributed in the network. Services and service features can be realised in the service plane by combining SIBs in the GFP.
The Distributed Functional Plane (DFP): This plane is of primary interest to network providers and designers. It defines the functional architecture of an IN-structured network in terms of network functionality, known as functional entities (FEs). SIBs in the GFP are realised in the DFP by a sequence of functional entity actions (FEAs) and their resulting information flows.
The Physical Plane (PP): A real view of the physical network. The PP is of primary interest to equipment providers. It describes the physical architecture for an IN-structured network in terms of physical entities (PEs) and the interfaces between them. The functional entities from the DFP are realised by physical entities in the physical plane.
Services that can be defined with INAP include:

Single number service: one number reaches a local number associated with the service
Personal access service: provide end user management of incoming calls
Disaster recovery service: define backup call destinations in case of disaster
Do not disturb service: call forwarding
Virtual private network short digit extension dialing service
Advantages created by the IN architecture:

extensive use of information processing techniques;
efficient use of network resources;
modularization of network functions;
integrated service creation and implementation by means of reusable standard network functions;
flexible allocation of network functions to physical entities;
portability of network functions among physical entities;
standardised communication between network functions via service independent interfaces;
customer control over their specific service attributes;
standardised management of service logic.


Source: www.telecomspace.com/ss7-in.html

Mobile Application Part (MAP)

Mobile Application Part (MAP) messages sent between mobile switches and databases to support user authentication, equipment identification, and roaming are carried by TCAP. In mobile networks (IS-41 and GSM) when a mobile subscriber roams into a new mobile switching center (MSC) area, the integrated visitor location register requests service profile information from the subscriber's home location register (HLR) using MAP (mobile application part) information carried within TCAP messages.

The Mobile Application Part (MAP), one of protocols in the SS7 suite, allows for the implementation of mobile network (GSM) signaling infrastructure. The premise behind MAP is to connect the distributed switching elements, called mobile switching centers (MSCs) with a master database called the Home Location Register (HLR). The HLR dynamically stores the current location and profile of a mobile network subscriber. The HLR is consulted during the processing of an incoming call. Conversely, the HLR is updated as the subscriber moves about the network and is thus serviced by different switches within the network.

MAP has been evolving as wireless networks grow, from supporting strictly voice to supporting packet data services as well. The fact that MAP is used to connect next-generation elements such as the Gateway GPRS Support Node (GGSN) and Serving GPRS Support Node (SGSN) is a testament to the sound design of the GSM signaling system.

MAP has several basic functions:

* Mechanism for a Gateway-MSC (GMSC) to obtain a routing number for an incoming call

* Mechanism for an MSC, via the integrated Visitor Location Register (VLR), to update subscriber status and routing number

* Delivery of subscriber CAMEL trigger data to switching elements via the VLR

* Delivery of subscriber supplementary service profile and data to switching elements via the VLR
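
To make the first two functions concrete, here is a toy sketch of the HLR bookkeeping involved. Everything below is assumed for illustration: the real MAP operations (UpdateLocation, SendRoutingInfo, ProvideRoamingNumber) are ASN.1 dialogues carried over TCAP, not Python calls, and the identifiers are invented:

# Toy HLR illustrating MAP location update and incoming-call routing.
HLR = {}  # IMSI -> address of the currently serving MSC/VLR

def update_location(imsi, vlr_address):
    # A roaming subscriber attaches at a new MSC; its VLR informs the HLR.
    HLR[imsi] = vlr_address

def get_routing_number(imsi):
    # A GMSC asks the HLR where to route an incoming call; the HLR would
    # in turn ask the serving VLR for a temporary roaming number (MSRN).
    serving_vlr = HLR[imsi]
    return "MSRN@" + serving_vlr

update_location("262011234567890", "vlr.msc2.example.net")
print(get_routing_number("262011234567890"))  # -> MSRN@vlr.msc2.example.net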


Source: www.telecomspace.com/ss7-map.html

ISDN User Part (ISUP)

ISUP (ISDN User Part) defines the messages and protocol used in the establishment and teardown of voice and data calls over the public switched telephone network (PSTN), and to manage the trunk network on which they rely. Despite its name, ISUP is used for both ISDN and non-ISDN calls. In the North American version of SS7, ISUP messages rely exclusively on MTP to transport messages between concerned nodes.

ISUP controls the circuits used to carry either voice or data traffic. In addition, the state of circuits can be verified and managed using ISUP. The management of the circuit infrastructure can occur both at the individual circuit level and for groups of circuits.

Services that can be defined using ISUP include switching, voice mail, and Internet offload. ISUP is ideal for applications such as switching and voice mail in which calls are routed between endpoints.

When used in conjunction with TCAP and SIGTRAN, ISUP becomes an enabler for Internet offload solutions in which Internet sessions of relatively long duration can be isolated from relatively brief phone conversations.

A simple call flow using ISUP signaling is as follows:

Call set up: When a call is placed to an out-of-switch number, the originating SSP transmits an ISUP initial address message (IAM) to reserve an idle trunk circuit from the originating switch to the destination switch. The destination switch rings the called party line if the line is available and transmits an ISUP address complete message (ACM) to the originating switch to indicate that the remote end of the trunk circuit has been reserved. The STP routes the ACM to the originating switch which rings the calling party's line and connects it to the trunk to complete the voice circuit from the calling party to the called party.

Call connection: When the called party picks up the phone, the destination switch terminates the ringing tone and transmits an ISUP answer message (ANM) to the originating switch via its home STP. The STP routes the ANM to the originating switch which verifies that the calling party's line is connected to the reserved trunk and, if so, initiates billing.

Call tear down: If the calling party hangs up first, the originating switch sends an ISUP release message (REL) to release the trunk circuit between the switches. The STP routes the REL to the destination switch. If the called party hangs up first, or if the line is busy, the destination switch sends an REL to the originating switch indicating the release cause (e.g., normal release or busy). Upon receiving the REL, the destination switch disconnects the trunk from the called party's line, sets the trunk state to idle, and transmits an ISUP release complete message (RLC) to the originating switch to acknowledge the release of the remote end of the trunk circuit. When the originating switch receives (or generates) the RLC, it terminates the billing cycle and sets the trunk state to idle in preparation for the next call.
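
The IAM/ACM/ANM/REL/RLC exchange above maps naturally onto a small state machine. Below is a minimal sketch of the originating switch's view of one trunk, using only the message names from the flow above; the states and method names are invented for illustration:

# Originating switch's side of the ISUP call flow (sketch).
class OriginatingTrunk:
    def __init__(self):
        self.state = "IDLE"

    def place_call(self):
        # Send IAM to reserve the trunk toward the destination switch.
        assert self.state == "IDLE"
        self.state = "WAIT_ACM"

    def hang_up(self):
        # Calling party hangs up first: send REL, await RLC.
        assert self.state == "CONNECTED"
        self.state = "WAIT_RLC"

    def on_message(self, msg):
        transitions = {
            ("WAIT_ACM", "ACM"): "WAIT_ANM",   # remote trunk end reserved
            ("WAIT_ANM", "ANM"): "CONNECTED",  # answered: billing starts
            ("WAIT_RLC", "RLC"): "IDLE",       # release acknowledged
        }
        self.state = transitions[(self.state, msg)]

trunk = OriginatingTrunk()
trunk.place_call()
trunk.on_message("ACM")
trunk.on_message("ANM")
trunk.hang_up()
trunk.on_message("RLC")
print(trunk.state)  # -> IDLE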


Source: www.telecomspace.com/ss7-isup.html

Transaction Capabilities Application Part (TCAP)

Transaction Capabilities Application Part (TCAP) defines the messages and protocol used to communicate between applications (deployed as subsystems) in nodes. It is used for database services such as calling card, 800, and AIN as well as switch-to-switch services including repeat dialing and call return. Because TCAP messages must be delivered to individual applications within the nodes they address, they use the SCCP for transport.

TCAP enables the deployment of advanced intelligent network services by supporting non-circuit related information exchange between signalling points using the SCCP connectionless service. TCAP messages are contained within the SCCP portion of an MSU. A TCAP message comprises a transaction portion and a component portion.

TCAP supports the exchange of non-circuit related data between applications across the SS7 network using the SCCP connectionless service. Queries and responses sent between SSPs and SCPs are carried in TCAP messages. For example, an SSP sends a TCAP query to determine the routing number associated with a dialed 800/888 number, or to check the personal identification number (PIN) of a calling card user. In mobile networks (IS-41 and GSM), TCAP carries Mobile Application Part (MAP) messages sent between mobile switches and databases to support user authentication, equipment identification, and roaming.
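
The transaction-plus-component structure mentioned above can be pictured with a simple data model. The field and operation names below are simplified stand-ins chosen for illustration; real TCAP messages are ASN.1/BER-encoded:

from dataclasses import dataclass, field

@dataclass
class Component:
    kind: str          # e.g. "Invoke" or "ReturnResult"
    operation: str     # e.g. an 800-number translation request
    parameters: dict

@dataclass
class TCAPMessage:
    message_type: str              # transaction portion: Begin/Continue/End
    transaction_id: int            # ties a response back to its query
    components: list = field(default_factory=list)  # component portion

# An SSP's 800-number query, reduced to this toy model:
query = TCAPMessage("Begin", 0x1234,
                    [Component("Invoke", "ProvideRoutingNumber",
                               {"dialed": "8005551234"})])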


Source: www.telecomspace.com/ss7-tcap.html

Signaling Connection Control Part (SCCP)

The Signaling Connection Control Part (SCCP) layer of the SS7 stack provides connectionless and connection-oriented network services and global title translation (GTT) capabilities above MTP Level 3. SCCP is used as the transport layer for TCAP-based services. It offers both Class 0 (Basic) and Class 1 (Sequenced) connectionless services. SCCP also provides Class 2 (connection-oriented) services, which are typically used by the Base Station System Application Part, Location Services Extension (BSSAP-LE).

The signaling connection control part (SCCP) provides two major functions that are lacking in the MTP. The first of these is the capability to address applications within a signaling point. The MTP can only receive and deliver messages from a node as a whole; it does not deal with software applications within a node.

While MTP network-management messages and basic call-setup messages are addressed to a node as a whole, other messages are used by separate applications (referred to as subsystems) within a node. Examples of subsystems are 800 call processing, calling-card processing, advanced intelligent network (AIN), and custom local-area signaling services (CLASS) services (e.g., repeat dialing and call return). The SCCP allows these subsystems to be addressed explicitly.


The second function provided by the SCCP is the ability to perform incremental routing using a capability called global title translation (GTT). GTT frees originating signaling points from the burden of having to know every potential destination to which they might have to route a message. A switch can originate a query, for example, and address it to an STP along with a request for GTT. The receiving STP can then examine a portion of the message, make a determination as to where the message should be routed, and then route it.

For example, calling-card queries (used to verify that a call can be properly billed to a calling card) must be routed to an SCP designated by the company that issued the calling card. Rather than maintaining a nationwide database of where such queries should be routed (based on the calling-card number), switches generate queries addressed to their local STPs, which, using GTT, select the correct destination to which the message should be routed. Note that there is no magic here; STPs must maintain a database that enables them to determine where a query should be routed. GTT effectively centralizes the problem and places it in a node (the STP) that has been designed to perform this function.

In performing GTT, an STP does not need to know the exact final destination of a message. It can, instead, perform intermediate GTT, in which it uses its tables to find another STP further along the route to the destination. That STP, in turn, can perform final GTT, routing the message to its actual destination.

Intermediate GTT minimizes the need for STPs to maintain extensive information about nodes that are far removed from them. GTT also is used at the STP to share load among mated SCPs in both normal and failure scenarios. In these instances, when messages arrive at an STP for final GTT and routing to a database, the STP can select from among available redundant SCPs. It can select an SCP on either a priority basis (referred to as primary backup) or so as to equalize the load across all available SCPs (referred to as load sharing).
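
A toy translation table makes the intermediate/final distinction concrete. The prefixes, point-code names and matching rule below are invented for illustration; production GTT tables are far richer:

# Toy global title translation: map a dialed global title (e.g. an 800 or
# calling-card number) to the next hop, by longest-prefix match.
GTT_TABLE = {
    "800555": ("final", "scp-a"),           # route straight to the 800 SCP
    "891":    ("intermediate", "stp-east"), # hand off to an STP nearer the target
}

def translate(global_title):
    for length in range(len(global_title), 0, -1):
        entry = GTT_TABLE.get(global_title[:length])
        if entry:
            return entry   # (translation type, destination node)
    raise LookupError("no translation for " + global_title)

print(translate("8005551234"))  # -> ('final', 'scp-a')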


Source: www.telecomspace.com/ss7-sccp.html

Message Transfer Part (MTP)

The Message Transfer Part (MTP) layer of the SS7 protocol provides the routing and network interface capabilities that support SCCP, TCAP, and ISUP. The Message Transfer Part is divided into three levels.

MTP Level 1 (Physical layer) defines the physical, electrical, and functional characteristics of the digital signaling link. Physical interfaces defined include E-1 (2048 kb/s; 32 x 64 kb/s channels), DS-1 (1544 kb/s; 24 x 64 kb/s channels), V.35 (64 kb/s), DS-0 (64 kb/s), and DS-0A (56 kb/s).

MTP Level 2 provides the reliability aspects of MTP, including error monitoring and recovery. MTP-2 is a signalling link layer which, together with MTP-3, provides reliable transfer of signalling messages between two directly connected signalling points.

MTP Level 3 provides the link, route, and traffic management aspects of MTP. MTP Level 3 thus ensures reliable transfer of the signalling messages, even in the case of the failure of signalling links and signalling transfer points. The protocol therefore includes the functions and procedures necessary both to inform the remote parts of the signalling network of the consequences of a fault, and to appropriately reconfigure the routing of messages through the signalling network.




Source: www.telecomspace.com/ss7-mtp.html

Signalling System #7 (SS7)

There are two essential components to all telephone calls. The first, and most obvious, is the actual content: our voices, faxes, modem data, etc. The second is the information that instructs telephone exchanges to establish connections and route the "content" to an appropriate destination. Telephony signaling is concerned with the creation of standards for the latter to achieve the former. These standards are known as protocols. SS7, or Signaling System Number 7, is simply another set of protocols that describe a means of communication between telephone switches in public telephone networks. They have been created and controlled by various bodies around the world, which leads to some specific local variations, but the principal organization with responsibility for their administration is the International Telecommunication Union (ITU-T).

Signalling System Number 7 (SS#7 or C7) is the protocol used by the telephone companies for interoffice signalling. In the past, in-band signalling techniques were used on interoffice trunks. This method of signalling used the same physical path for both the call-control signalling and the actual connected call. This method of signalling is inefficient and is rapidly being replaced by out-of-band or common-channel signalling techniques.

To understand SS7 we must first understand something of the basic inefficiency of previous signaling methods utilized in the Public Switched Telephone Network (PSTN). Until relatively recently, all telephone connections were managed by a variety of techniques centered on “in band” signaling.

A network utilizing common-channel signalling is actually two networks in one:

1. First there is the circuit-switched "user" network which actually carries the user voice and data traffic. It provides a physical path between the source and destination.
2. The second is the signalling network which carries the call control traffic. It is a packet-switched network using a common channel switching protocol.

The original common channel interoffice signalling protocols were based on Signalling System Number 6 (SS#6). Today SS#7 is being used in new installations worldwide. SS#7 is the defined interoffice signalling protocol for ISDN. It is also in common use today outside of the ISDN environment.

The primary function of SS#7 is to provide call control, remote network management, and maintenance capabilities for the interoffice telephone network. SS#7 performs these functions by exchanging control messages between SS#7 telephone exchanges (signalling points or SPs) and SS#7 signalling transfer points (STPs).

The switching offices (SPs) handle the SS#7 control network as well as the user circuit-switched network. Basically, the SS#7 control network tells the switching office which paths to establish over the circuit-switched network. The STPs route SS#7 control packets across the signalling network. A switching office may or may not be an STP.

SS7 Protocol layers:

The SS7 network is an interconnected set of network elements that is used to exchange messages in support of telecommunications functions. The SS7 protocol is designed to both facilitate these functions and to maintain the network over which they are provided. Like most modern protocols, the SS7 protocol is layered.


Source: www.telecomspace.com/ss7.html

Monday, November 9, 2009

Premises Cabling Systems (Fiber, Copper and Wireless) (CPCT Level)

Overview of Premises Cabling and Standards
Jargon
Networks
Design, New T-568-C Nomenclature
UTP Cables
Terminations, UTP Termination (Tutorial)
UTP Installation VHO 66 Block, 110 Block, Jacks, Plugs
UTP Testing
Coax Cable VHO Coax Termination
Fiber Optics in Premises Cabling
Wireless
Glossary


Source: www.thefoa.org/tech/ref/contents.html#Premises

Using fiber optic systems

User's Guide To Fiber Optic Networks
Choosing, Installing and Using Fiber Optic Products For Users (TT, FOA Tech Bulletin, PDF 0.1 MB)
Maintenance
Restoration (planning & implementing)
Upgrading


Source: www.thefoa.org/tech/ref/contents.html#User

Testing & Troubleshooting Fiber Optic Systems

Fiber Optic Test Instruments
Visual tracing & fault location
Measuring Optical Power, Units of Measure (dB, dBm)
Fiber Loss Testing and Multimode Modal Control
Modal Distribution Effects on Multimode Fiber and Cables Measurements
Reference Cables
Special Applications/Hybrid Cables
Mismatched Multimode Fiber Losses
Installed Cable Plant Testing (OFSTP-14) VHO Insertion loss testing, What Loss Should You Expect?
Patchcord or Single Cable Testing (FOTP-171)
Testing cables with different types of connectors.
Loss by Cable Substitution - when other methods will not work
Connector and Splice Loss Testing (FOTP-34)
Data Link or Network Testing
OTDR testing VHO Using an OTDR
Microscope Inspection of Connectors
Testing long haul networks (CD, PMD)


Source: www.thefoa.org/tech/ref/contents.html#Test

Installation of FO Cable Plants

Installing Fiber Optic Cable Plants - FOA Tech Bulletin, basis of the NECA/FOA-301 Installation Standard (PDF 0.2 MB) (TT)
Getting Training
Installation Checklist - step by step installation planning
The Role of the Contractor
Planning the Installation
Safety procedures (TT)
Tools and Equipment
Cleaning Fiber Optic Connections (TT)
Receiving cable and components onsite
Installing Cable - General Guidelines
Installing a Swivel Pulling Eye on Cable

Source: www.thefoa.org/tech/ref/contents.html#Install

Designing Fiber Optic Networks

Designing Fiber Optic Networks, a reference guide for users, contractors, installers, etc. (PDF, 1.3 MB)(TT)
Network User's Guide, covers design through maintenance. (PDF 0.1 MB)(TT)
Specifications for fiber optic LANs and Links (TT)
Link Loss Budgets (TT)
Fiber or Copper? Overview and Fiber LANs (TT)
FTTx Online Tutorial (TT)
Estimating Fiber Optic Installations (TT)
Mismatched Multimode Fiber Losses
Planning For Restoration (PDF, 0.1 MB) (TT)


Source: www.thefoa.org/tech/ref/contents.html#Design

Fiber Optic Components

Optical Fiber
Singlemode Fiber Nomenclature
Multimode Premises Cable Plant Nomenclature (OM1, OM2, OM3, etc.)
Plastic Optical Fiber (POF)
How Optical Fiber Is Manufactured




Source: www.thefoa.org/tech/ref/contents.html#Components

Fiber Optic Technology and Standards

Wavelengths of Light Used In Fiber Optics
Wavelength-Division Multiplexing
How Fiber Amplifiers Work
TIA/EIA Fiber Optic Standards



Source: www.thefoa.org/tech/ref/contents.html#Tech-Standards

Applications of Fiber Optics

Communications
Fiber Optic Datalinks
Fiber Optic Transceivers for Datalinks
Telephone, long haul, metropolitan
FTTH, FTTH Architectures, FTTH PON Protocols
Internet
CATV
Premises Networks, LANs, Section on Premises Networks
CCTV for surveillance
Industrial, building automation
Consumer entertainment
Supporting wireless
Military
Media Conversion: Converting copper or wireless to fiber, etc.
Attenuating Power In Overloaded Datalinks

Non-Communications Applications
Sensors
Fiber Optic Lighting (TT)
Inspection/Viewing Using Fiber Optics

Source: www.thefoa.org/tech/ref/contents.html#Applications

FOA Approved Training Programs

Updated October 29, 2009

Over 27,000 students have become CFOT® certified through FOA-Approved Schools!



The Fiber Optic Association has developed guidelines for training course approval and approves schools meeting our standards. Schools which offer training meeting FOA standards are listed on this website and are authorized to offer FOA certifications.
Note: ONLY schools which are listed here are FOA-Approved and authorized to offer FOA certifications. Schools listed here are required to offer FOA certifications to all students. If you have any questions regarding a school's status with the FOA, contact the FOA staff.

Certified Fiber Optic Technician Training
FOA certifications are widely respected for their comprehensive coverage of knowledge, skills and abilities (KSAs), a result of the FOA having access to the most experienced and knowledgeable people in fiber optics to help develop FOA programs. The guidelines for course approval require schools to cover subjects appropriate to the course description, provide proper course materials, and use instructors who are FOA certified. All courses are reviewed and approved by the FOA and all instructors are FOA-certified.
Students at FOA approved courses are offered FOA membership and certification testing at special student rates.

Apprenticeship Programs
Through the National Joint Apprenticeship and Training Committee (NJATC), the IBEW/NECA Apprenticeship Programs offer the FOA CFOT as part of their programs at locations which are listed here. JATC training is available to IBEW/NECA members only. Contact the NJATC for more information.

Advanced Training
Some schools also offer advanced courses leading to the FOA CPCT, CFxT, AFOT or CFOS certification. Such courses are noted in the school information.

--------------------------------------------------------------------------------

The following training organizations have received FOA approval for their courses:
(Listings are grouped by regions in the USA, roughly by Zip Code, and by country.)
USA: Northeast -Mid Atlantic - South - Midwest - Southwest - West Coast - JATCs
Canada - Worldwide

Source: www.thefoa.org/foa_aprv.htm

Fiber Optic Cable and Fiber Cable

250µm Bare Fiber

High-capacity, high-data-rate, long-haul terrestrial networks; optimized for dense wavelength division multiplexing (DWDM) and optical networking technology

Source: www.fiberoptics4sale.com/page/FOFS/CTGY/Fiber_Optic_Cables_Fibers

FOA Certification

Certification
In today's high-tech world, certification is considered proof of professional status and is often required for jobs. The FOA was chartered to approve schools offering training and to provide certifications as a service to the fiber optic industry. The FOA programs are developed and maintained by experts in the fiber optic business, most of whom have over 20 years of experience as technicians, installers, manufacturers and teachers of fiber optics.

What is certification? Certification means you have achieved certain performance criteria set by the certifying organization, usually knowledge, skills and abilities (KSAs), either through training or experience. Certifications attest to your KSAs, and their value is the recognition of those KSAs by customers, coworkers and employers. Certification is not a license, which is an official approval of an individual to do business in the jurisdiction issuing the license, such as a state in the USA. Many states in the USA now require licensing for contractors installing communications cabling. Check your local area to determine the requirements for licensing.


The FOA offers three levels of fiber optic certification:

Basic Level:
CFOT - Certified Fiber Optic Technician for general fiber optics
CFxT - Certified Fiber Optic Technician for FTTx (fiber to the home, fiber to the premises, fiber to the curb)
CPCT - Certified Premises Cabling Technician (fiber, copper and wireless in building and campus networks)



Advanced
AFOT - Advanced Fiber Optic Technician, available from FOA-approved schools

Specialist
CFOS or Certified Fiber Optic Specialist. The specialist-level CFOS tests the applicant's knowledge of fiber optics in a broad-based exam that covers technology, components, installation and testing.


Source: www.thefoa.org/Certs.htm

Fiber Optic Imaging

Fiber optic imaging uses the fact that the light striking the end of an individual fiber will be transmitted to the other end of that fiber. Each fiber acts as a light pipe, transmitting the light from that part of the image along the fiber. If the arrangement of the fibers in the bundle is kept constant then the transmitted light forms a mosaic image of the light which struck the end of the bundle.
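
The fixed-arrangement idea can be demonstrated numerically. A small sketch (numpy assumed; array indices stand in for fiber positions in the bundle):

import numpy as np

rng = np.random.default_rng(0)
image = rng.random((8, 8))       # light intensity at the input face
fibers = image.flatten()         # one sample carried per fiber

# Coherent bundle: fiber order is identical at both ends, so the
# mosaic image is preserved.
coherent_out = fibers.reshape(8, 8)
assert np.array_equal(coherent_out, image)

# Incoherent bundle: the fibers are scrambled, so light arrives
# but the image does not.
incoherent_out = rng.permutation(fibers).reshape(8, 8)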


Source: hyperphysics.phy-astr.gsu.edu/hbase/optmod/fibopt.html

The Fiber Optic Association, Inc.

Sign up to receive the FOA eMail Newsletter.

Time To Renew Your CFOT? You Can Now Renew Online
This Month's FOA Online Newsletter:
Fiber Optics Pioneer Wins Nobel Prize
Unsung Heroes of Fiber Optics
Fiber Replacement for USB?
"Barnraising" Broadband
Fiber Optics - Used for everything including the kitchen sink!

All in the FOA online newsletter.
FOA Creates Printed Textbook To Complement Online Fiber Optic Reference Guide
The FOA has been working on a printed version of our online reference guide - a new printed reference "textbook" for fiber optics. It is at the printers and should be available from Amazon.com shortly. Order your own copy from Amazon.com for only $24.95.
We've also updated the Online Reference Guide, adding several pages on splicing, including a new fusion splicing section.
Note how the Online Reference pages are formatted - you can easily read them on an iPhone or portable web-enabled device.
Go see The FOA Online Fiber Optic Reference for yourself!

Looking For Fiber Optic Training? You can find a list of FOA-approved schools that offer CFOT certification here.


Certification Renewal: CFOTs must renew each year to maintain their CFOT or CFOS certifications. Please make sure we have your current address. Download the renewal form. New - Renew Online

Source: www.thefoa.org

Sunday, November 8, 2009

Welcome Letter

Dear Student,

Thank you for your interest in Computer Networking Center. We are committed to providing you with the highest quality training at an affordable price, and we have been doing so for the last 10 years. If you are looking to make a lifetime career change, increase your salary, become more marketable or find more excitement in your workday, CNC is the right choice for you. Our classroom instructors do not just show you slides and read from a book. Computer Networking Center recognizes the importance of practical training. That is why we have designed our training to be "hands-on" as much as possible. Computer Networking Center wants you to be more than just paper certified for A+ PC Technician, Network+, Security+, Linux+, MCSE(A), CCNA, MOS, and Web Design. We want you to have practical "hands-on" experience so you can function properly in a real work environment. We will provide you with lab training worth at least one year of field experience. Here are a few of the many benefits of our unique programs:

Seminars with our Certified Instructors and Networking Specialists who have years of proven practical experience.
Real hands-on experience on different operating systems including Windows 95/98/NT/2000/2003/XP & Vista, Linux, and Novell NetWare.
Windows Computer Based Training (CBT) and third party simulation exams.
Full-Time Instructors with work experience.
FREE technical support is provided for students for up to 6 months. This is our guarantee to help you get a job and improve your career.

Our goal is to help you learn state-of-the-art networking technologies and prepare you for the proficiency exams, all at an affordable rate. We believe there are a lot of people who want to upgrade their skills, but the cost of training puts it out of reach for most. We want to help students accomplish their ambitions and build their future. If you don't want to lose this opportunity, call now and reserve a seat for yourself.
If you have any questions or if you want to visit our office, please feel free to call me at 734-462-2090. Thank you again for your interest in Computer Networking Center.

Sincerely,

CNC Staff



Source: www.computernetworkingcenter.com

Projects in Network Security

Trusted Grid Computing with Dynamic Resources and Automated
Intrusion Responses (GridSec)

An Algebra for Intrusion Correlation (LATTICED)
Project overview

Coordinated Suppression of Simultaneous Attacks (COSSACK)
Project overview

Fault Tolerant Mesh of Trust Applied to DNSSEC (FMESHD)
Project overview

Fault Tolerant Networking Through Intrusion Identification and Secure Compartments (FNIISC)
Project overview

First Aid for Computer Systems (FACS)
Project overview

Survivability Using Controlled Security Services (SUCSES)
Project overview


Source: www.isi.edu/div7/current_research.html

Projects in Network Architecture

eXplicit Control Protocol Development (XCP)
Project overview

Beyond BGP: Flexible and Scalable Interdomain Routing (BBGP)
Project overview

Extending the X-Bone Overlay Deployment Tool for Research and Classroom Use (STI-XTEND)
Project overview

Optical CDMA LAN (OCL)
Project overview

OptIPuter@ISI: XCP
Project overview


Source: www.isi.edu/div7/current_research.html

Projects in Network Technologies

Dynamic Resource Allocation via GMPLS Optical Networks (DRAGON)
Project overview

Responsible Conferencing: Congestion Control for High Quality Media (CCHQM)
Project overview

Cyber Defense Technology Experimental Research Network (DETER)
Project overview


COVET-WEST
Project overview

A High Definition Collaboratory (UltraGrid)
Project overview

MAC Protocols Specific for Sensor Networks (MACCS)
Project overview

Network Vulnerability Analysis Toolset (NVAT)
Project overview

Pervasive Monitoring and Control of Water Lifeline Systems for Disaster Recovery
Project overview

Protected Virtual Networking API Using a File System Interface (NET-FS)
Project overview


Source: www.isi.edu/div7/current_research.html

Computer Networking (Hardcover)





Source: www.amazon.com/Computer-Networking-Stanford-H-Rowe/dp/0130487376

Networking Research Projects

CASA: Collaborative Adaptive Sensing of the Atmosphere

MINC: Multicast-based Inference of Network-Internal Characteristics

AMPS: Active Multimedia Proxy Services

Fluid models for large heterogeneous networks

Publish-Subscribe networks
Wireless networks
Network Measurement
Fluid Simulation

MANIC: Multimedia Asynchronous Networked Individualized Courseware
Copies of our publications can be found at the CNRG publications page.

Source: gaia.cs.umass.edu/networks/research.html

Peer to Peer File Sharing - P2P Networking

Peer-to-peer (P2P) networking eliminates the need for central servers, allowing all computers to communicate and share resources as equals. Music file sharing, instant messaging and other popular network applications rely on P2P technology.



Source: compnetworking.about.com/od/p2ppeertopeer/Peer_to_Peer_File_Sharing_P2P_Networking.htm

Bluetooth Wireless Technology

Bluetooth is a specification for using low-power radio technology to link phones and computers over short distances without wires. Learn about Bluetooth technology to network cell phones, PDAs, and computer peripherals.


Source: compnetworking.about.com/od/bluetooth/Bluetooth_Wireless_Technology.htm

VPN - Virtual Private Networking

VPN solutions support remote access and private data communications over public networks as a cheaper alternative to leased lines. VPN clients communicate with VPN servers utilizing a number of specialized protocols.



Source: compnetworking.about.com/od/vpn/VPN_Virtual_Private_Networking.htm

New Home Network Technology

New developments in home networks affect more than just home offices and entertainment systems. Some of the most exciting advances are in healthcare and housing.


In healthcare, Wireless Sensor Networks (WSNs) let doctors monitor patients wirelessly. Patients wear wireless sensors that transmit data through specialized channels. These signals carry information about vital signs, body functions, patient behavior, and the patient's environment. In the case of an unusual data transmission -- like a sudden spike in blood pressure or a report that an active patient has become suddenly still -- an emergency channel picks up the signal and dispatches medical services to the patient's home.


The housing industry is another important field for home network technology development. Bill Gates owns one of the few smart houses in existence, but someday, we might all live in one. A smart house is a fully networked structure with functions that can be controlled from a central computer, making it an ideal technology for homeowners who travel frequently or for homeowners who simply want it all.




Builders are beginning to offer home network options for their customers that range from the primitive -- installing Ethernet cables in the walls -- to the cutting-edge -- managing the ambient temperature from a laptop hundreds of miles from home. In one trial experiment called Laundry Time, Microsoft, Hewlett Packard, Panasonic, Proctor & Gamble and Whirlpool demonstrated the power of interfacing home appliances. The experiment networked a washing machine and clothes dryer with a TV, PC and cell phone. This unheard-of combination of networked devices let homeowners know when their laundry loads were finished washing or drying by sending alerts to their TV screens, instant messaging systems or cell phones. Research and development also continues for systems that perform a wide variety of functions -- data and voice recognition might change the way we enter, exit and secure our homes, while service appliances could prepare our food, control indoor temperatures and keep our homes clean.

This technology is promising, but it's not quite ready for the consumer market yet. The average consumer can't afford a WSN or a smart house, and even those who can might not be able to operate these sophisticated systems. Another issue is security -- until developers find a way to secure these networks, consumers risk sharing medical information and leaving their homes open to attack.


Source: computer.howstuffworks.com/home-network4.htm

Wireless Networks

The easiest, least expensive way to connect the computers in your home is to use a wireless network, which uses radio waves instead of wires. The absence of physical wires makes this kind of network very flexible. For example, you can move a laptop from room to room without fiddling with network cables and without losing your connection. The downside is that wireless connections are generally slower than Ethernet connections and they are less secure unless you take measures to protect your network.

If you want to build a wireless network, you'll need a wireless router. Signals from a wireless router extend about 100 feet (30.5 meters) in all directions, but walls can interrupt the signal. Depending on the size and shape of your home and the range of the router, you may need to purchase a range extender or repeater to get enough coverage.


You'll also need a wireless adapter in each computer you plan to connect to the network. You can add printers and other devices to the network as well. Some new models have built-in wireless communication capabilities, and you can use a wireless Ethernet bridge to add wireless capabilities to devices that don't. Any devices that use the Bluetooth standard can also connect easily to each other within a range of about 10 meters (32 feet), and most computers, printers, cell phones, home entertainment systems and other gadgets come installed with the technology.


If you decide to build a wireless network, you'll need to take steps to protect it -- you don't want your neighbors hitchhiking on your wireless signal. Wireless security options include:

Wired Equivalent Privacy (WEP)
Wi-Fi Protected Access (WPA)
Media Access Control (MAC) address filtering
You can choose which method (or combination of methods) you want to use when you set up your wireless router. The IEEE has approved each of these security standards, but WEP has been shown to be easily cracked. If you use WEP, consider adding Temporal Key Integrity Protocol (TKIP) to your operating system. TKIP is a backward-compatible wrapper, which means you can add it to your existing security option without interfering with its activity. Think of it like wrapping a bandage around a cut finger -- the bandage protects the finger without preventing it from carrying out its normal functions.

Source: computer.howstuffworks.com/home-network3.htm

Wired Networks

Ethernet and wireless networks each have advantages and disadvantages; depending on your needs, one may serve you better than the other. Wired networks provide users with plenty of security and the ability to move lots of data very quickly. Wired networks are typically faster than wireless networks, and they can be very affordable. However, the cost of Ethernet cable can add up -- the more computers on your network and the farther apart they are, the more expensive your network will be. In addition, unless you're building a new house and installing Ethernet cable in the walls, you'll be able to see the cables running from place to place around your home, and wires can greatly limit your mobility. A laptop owner, for example, won't be able to move around easily if his computer is tethered to the wall.

There are three basic systems people use to set up wired networks. An Ethernet system uses either a twisted copper-pair or coaxial-based transport system. The most commonly used cable for Ethernet is category 5 unshielded twisted pair (UTP) cable -- it's useful for businesses that want to connect several devices together, such as computers and printers, but it's bulky and expensive, making it less practical for home use. A phone line, on the other hand, simply uses existing phone wiring found in most homes and can provide fast services such as DSL. Finally, broadband systems provide cable Internet access and use the same type of coaxial cable that gives us cable television.

If you plan to connect only two computers, all you'll need is a network interface card (NIC) in each computer and a cable to run between them. If you want to connect several computers or other devices, you'll need an additional piece of equipment: an Ethernet router. You'll also need a cable to connect each computer or device to the router.

Once you have all of your equipment, all you need to do is install it and configure your computers so they can talk to one another. Exactly what you need to do depends on the type of network and your existing hardware. For example, if your computers came with network cards already installed, all you'll need to do is buy a router and cables and configure your computers to use them. Regardless of which type you select, the routers, adapters and other hardware you buy should come with complete setup instructions.


The steps you'll need to take to configure your computers will also vary based on your hardware and your operating system. User manuals usually provide the necessary information, and Web sites dedicated to specific operating systems often have helpful tips on getting several different computers to talk to each other.

Source: computer.howstuffworks.com/home-network2.htm

Building a Home Network

The two most popular home network types are wireless and Ethernet networks. In both of these types, the router does most of the work by directing the traffic between the connected devices. By connecting a router to your dial-up, DSL or cable modem, you can also allow multiple computers to share one connection to the Internet.

If you're going to connect your network to the Internet, you'll need a firewall. A firewall is simply a hardware device or software program that protects your network from malicious users and offensive Web sites, keeping hackers from accessing or destroying your data. Although they're essential for businesses looking to protect large amounts of information, they're just as necessary for someone setting up a home network, since a firewall will secure transactions that might include Social Security numbers, addresses, phone numbers and credit card numbers. Most routers combine wireless and Ethernet technology and also include a hardware firewall.

Many software firewalls installed onto your computer block all incoming information by default and prompt you for permission to allow the information to pass. In this way, a software firewall can learn which types of information you want to allow into your network. Symantec, McAfee and ZoneAlarm are popular companies that produce software-based firewalls. These companies usually offer some free firewall protection as well as advanced security that you can buy.

A router connects your computers to one another. If you connect it to your modem, it will also connect your network to the Internet.


Source: computer.howstuffworks.com/home-network1.htm

How Home Networking Works

Once, home networks were primarily the realm of technophiles -- most families either didn't need or couldn't afford more than one computer. But now, in addition to using computers for e-mail, people use them for schoolwork, shopping, instant messaging, downloading music and videos, and playing games. For many families, one computer is no longer enough to go around. In a household with multiple computers, a home network often becomes a necessity rather than a technical toy.


A home network is simply a method of allowing computers to communicate with one another. If you have two or more computers in your home, a network can let them share:
Files and documents
An Internet connection
Printers, print servers and scanners
Stereos, TVs and game systems
CD burners
The different network types use different hardware, but they all have the same essential components:


More than one computer
Hardware (such as a router) and software (either built in to the operating system or as a separate application) to coordinate the exchange of information
A path for the information to follow from one computer to another

Source: computer.howstuffworks.com/home-network.htm

Computer Networking:

Vinton G. Cerf
Senior Vice President, Data Services Division
MCI Telecommunications Corporation

--------------------------------------------------------------------------------

The Internet Phenomenon
The Internet has gone from near-invisibility to near-ubiquity in little more than a year. In fact, though, today's multi-billion dollar industry in Internet hardware and software is the direct descendant of strategically-motivated fundamental research begun in the 1960s with federal sponsorship. A fertile mixture of high-risk ideas, stable research funding, visionary leadership, extraordinary grass-roots cooperation, and vigorous entrepreneurship has led to an emerging Global Information Infrastructure unlike anything that has ever existed.

Although not easy to estimate with accuracy, the 1994 data communications market approached roughly $15 billion/year if one includes private line data services ($9 billion/year), local area network and bridge/router equipment ($3 billion/year), wide area network services ($1 billion/year), electronic messaging and online services ($1 billion/year), and proprietary networking software and hardware ($1 billion/year). Some of these markets show annual growth rates in the 35-50% range, and the Internet itself has doubled in size each year since 1988.

As this article is written in 1995, the Internet encompasses an estimated 50,000 networks worldwide, about half of which are in the United States. There are over 5 million computers permanently attached to the Internet [as of mid-1996 the number is between 10 and 15 million!], plus at least that many portable and desktop systems which are only intermittently online. (There were only 4 computers on the ARPANET in 1969, and only 200 on the Internet in 1983!) Traffic rates measured in the recently "retired" NSFNET backbone approached 20 trillion bytes per month and were growing at a 100% annual rate.

What triggered this phenomenon? What sustains it? How is its evolution managed? The answers to these questions have their roots in DARPA-sponsored research in the 1960s into a then-risky new approach to data communication: packet switching. The U.S. government has played a critical role in the evolution and application of advanced computer networking technology and deserves credit for stimulating wide-ranging exploration and experimentation over the course of several decades.

Evolutionary Stages
Packet Switching

Today's computer communication networks are based on a technology called packet switching. This technology, which arose from DARPA-sponsored research in the 1960s, is fundamentally different from the technology that was then employed by the telephone system (which was based on "circuit switching") or by the military messaging system (which was based on "message switching").

In a packet switching system, data to be communicated is broken into small chunks that are labeled to show where they come from and where they are to go, rather like postcards in the postal system. Like postcards, packets have a maximum length and are not necessarily reliable. Packets are forwarded from one computer to another until they arrive at their destination. If any are lost, they are re-sent by the originator. The recipient acknowledges receipt of packets to eliminate unnecessary re-transmissions.
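The postcard analogy maps neatly onto a few lines of code. Below is a toy Python sketch (an illustration for this article, not any real protocol): the message is chopped into labeled packets, a lossy channel drops some of them at random, and the originator keeps re-sending whatever the recipient has not yet acknowledged until the whole message can be reassembled in order.

import random

def packetize(data, size, src, dst):
    # split data into postcard-like packets labeled with origin and destination
    return [{"src": src, "dst": dst, "seq": i, "payload": data[i:i + size]}
            for i in range(0, len(data), size)]

def lossy_send(packets, loss_rate=0.3):
    # forward packets; each one independently survives or is dropped
    return [p for p in packets if random.random() > loss_rate]

packets = packetize(b"packets, like postcards, may be lost", 8, "host-a", "host-b")
delivered = {}                        # recipient's reassembly buffer, keyed by seq
while len(delivered) < len(packets):
    unacked = [p for p in packets if p["seq"] not in delivered]
    for p in lossy_send(unacked):     # only unacknowledged packets are re-sent
        delivered[p["seq"]] = p["payload"]

message = b"".join(delivered[s] for s in sorted(delivered))
assert message == b"packets, like postcards, may be lost"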

The earliest packet switching research was sponsored by the Information Processing Techniques Office of the Department of Defense Advanced Research Projects Agency, which acted as a visionary force shaping the evolution of computer networking as a tool for coherent harnessing of far-flung computing resources. The first experiments were conducted around 1966. Shortly thereafter, similar work began at the National Physical Laboratory in the UK. In 1968 DARPA developed and released a Request for Quotation for a communication system based on a set of small, interconnected computers it called "Interface Message Processors" or "IMPs." The competition was won by Bolt Beranek and Newman (BBN), a research firm in Cambridge, MA, and by September 1969 BBN had developed and delivered the first IMP to the Network Measurement Center located at UCLA. The "ARPANET" was to touch off an explosion of networking research that continues to the present.

Apart from exercising leadership by issuing its RFQ for a system that many thought was simply not feasible (AT&T was particularly pessimistic), DARPA also set a crucial tone by making the research entirely unclassified and by engaging some of the most creative members of the computer science community who tackled this communication problem without the benefit of the experience (and hence bias) of traditional telephony groups. Even within the computer science community, though, the technical approach was not uniformly well-received, and it is to DARPA's credit that it persevered despite much advice to the contrary.

ARPANET

The ARPANET grew from four nodes in 1969 to roughly one hundred by 1975. In the course of this growth, a crucial public demonstration was held during the first International Conference on Computer Communication in October 1972. Many skeptics were converted by witnessing the responsiveness and robustness of the system. Out of that pivotal meeting came an International Network Working Group (INWG) composed of researchers who had begun to explore packet switching concepts in earnest. Several INWG participants went on to develop an international standard for packet communication known as X.25, and to lead the development of commercial packet switching in the U.S., Canada, France, and the UK, specifically for systems such as Telenet, Datapac, Experimental Packet Switching System, Transpac, and Reseau Communication par Paquet (RCP).

By mid-1975, DARPA had concluded that the ARPANET was stable and should be turned over to a separate agency for operational management. Responsibility was therefore transferred to the Defense Communications Agency (now known as the Defense Information Systems Agency).

New Packet Technologies

ARPANET was a single terrestrial network. Having seen that ARPANET was not only feasible but powerfully useful, DARPA began a series of research programs intended to extend the utility of packet switching to ships at sea and ground mobile units through the use of synchronous satellites (SATNET) and ground mobile packet radio (PRNET). These programs were begun in 1973, as was a prophetic effort known as "Internetting" which was intended to solve the problem of linking different kinds of packet networks together without requiring the users or their computers to know much about how packets moved from one network to another.

Also in the early 1970s, DARPA provided follow-on funding for a research project originated in the late 1960s by the Air Force Office of Scientific Research to explore the use of radio for a packet switched network. This effort, at the University of Hawaii, led to new mobile packet radio ideas and also to the design of the now-famous Ethernet. The Ethernet concept arose when a researcher from Xerox PARC spent a sabbatical period at the University of Hawaii and had the insight that the random access radio system could be operated on a coaxial cable, but at data rates thousands of times faster than could then be supported over the air. Ethernet has become a cornerstone of the multi-billion dollar local area network industry.

These efforts came together in 1977 when a four-network demonstration was conducted linking ARPANET, SATNET, Ethernet and the PRNET. The satellite effort, in particular, drew international involvement from participants in the UK, Norway, and later Italy and Germany.

The Internet Protocols

Another DARPA effort of the early 1970s involved research at Stanford to design a new set of computer communication protocols that would allow multiple packet networks to be interconnected in a flexible and dynamic way. In defense settings, circumstances often prevented detailed planning for communication system deployment, and a dynamic, packet-oriented, multiple-network design provided the basis for a highly robust and flexible network to support command-and-control applications.

The first phase of this work culminated in a demonstration in July 1977, the success of which led to a sustained effort to implement robust versions of the basic Internet protocols (called TCP/IP for the two main protocols: Transmission Control Protocol and Internet Protocol). The roles of DARPA and the Defense Communications Agency were critical both in supplying sustained funding for implementing the protocols on various computers and operating systems and for the persistent and determined application of the new protocols to real needs.

By 1980, sufficient experience had been gained that the design of the protocols could be frozen and a serious effort mounted to require all computers on the ARPANET to adopt TCP/IP. This effort culminated in a switch to the new protocols in January 1983. ARPANET had graduated to production use, but it was still an evolving experimental testbed under the leadership of DARPA and DCA.

ARPANET -> NSFNET -> Internet

As DARPA and DCA were preparing to convert the organizations they supported to TCP/IP, the National Science Foundation started an effort called CSNET (for Computer Science Network) to interconnect the nation's computer science departments, many of which did not have access to ARPANET. CSNET adopted TCP/IP, but developed a dial-up "Phone-mail" capability for electronic mail exchange among computers that were not on ARPANET, and pioneered the use of TCP/IP over the X.25 protocol standard that emerged from commercial packet switching efforts. Thus, the beginning of the 1980s marked the expansion of U.S. government agency interest in networking, and by the mid-1980s the Department of Energy and NASA also had become involved.

NSF's interest in high-bandwidth attachment was ignited in 1986 after the start of the Supercomputer Centers program. NSF paved the way to link researchers to the Centers through its sponsorship of NSFNET, which augmented ARPANET as a major network backbone and eventually replaced ARPANET when ARPANET was retired in 1990. Then-Senator Gore's 1986 legislation calling for the interconnection of the Centers using fiber optic technology ultimately led the administration to respond with the High Performance Computing and Communications (HPCC) Initiative.

Among the most critical decisions that NSF made was to support the creation of "regional" or "intermediate-level" networks that would aggregate demand from the nation's universities and feed it to the NSFNET backbone. The backbone itself was initially implemented using gateways (systems used to route traffic) developed at the University of Delaware and links operating at the ARPANET speed of 56K bps. Because of rapidly increasing demand, though, NSF in 1988 selected MERIT (at the University of Michigan) to lead a cooperative agreement with MCI and IBM to develop a 1.5M bps backbone. IBM developed new routers and MCI supplied 1.5M bps circuits, and NSFNET was reborn with roughly 30 times the bandwidth of its predecessor.

The regional networks quickly became the primary means by which universities and other research institutions linked to the NSFNET backbone. NSF wisely advised these networks that their seed funding would have limited duration and they would have to become self-sustaining. Although this took longer than originally expected, most of the regional networks (such as BARNET, SURANET, JVNCNET, CICNET, NYSERNET, NWNET, and so on) now have either gone into for-profit mode or have spun off for-profit operations.

Because of continued increases in demand, NSF recently re-visited the cooperative agreement with MCI, IBM and MERIT. A non-profit organization, Advanced Networks and Services (ANS), was born, and has satisfied the current demand for Internet capacity using 45M bps circuits. The name "Internet" refers to the global seamless interconnection of networks made possible by the protocols devised in the 1970s through DARPA-sponsored research -- the Internet protocols, still in use today.

A Commercial Market Emerges

By the mid-1980s there was sufficient interest in the use of Internet in the research, educational, and defense communities that it was possible to establish businesses making equipment for Internet implementation. Companies such as Cisco Systems, Proteon, and later Wellfleet (now Bay Networks) and 3Com became interested in manufacturing and selling "routers," the commercial equivalents of the "gateways" that had been built by BBN in the early ARPANET experiments. Cisco alone is already a $5 billion business, and others seem headed rapidly toward that level.

The previous subsection noted the "privatization" of the NSF regional networks. NYSERNET (the New York State regional network) was the first to spin out a for-profit company, Performance Systems International, which now is one of the more successful Internet service providers. Other Internet providers actually began as independent entities; one of these is UUNET, which started as a private non-profit but turned for-profit and began offering an Internet service it calls ALTERNET; another is CERFNet, a for-profit operation initiated by General Atomic in 1989; a third is NEARNet, started in the Boston area and recently absorbed into a cluster of for-profit services operated as BBN Planet (recall that BBN was the original developer of the ARPANET IMP; BBN also created the Telenet service, which it sold to GTE and which subsequently became Sprintnet).

In 1988, in a conscious effort to test Federal policy on commercial use of Internet, the Corporation for National Research Initiatives approached the Federal Networking Council (actually its predecessor, the Federal Research Internet Coordinating Committee) for permission to experiment with the interconnection of MCI Mail with the Internet. An experimental electronic mail relay was built and put into operation in 1989, and shortly thereafter Compuserve, ATTMail and Sprintmail (Telemail) followed suit. Once again, a far-sighted experimental effort coupled with a wise policy choice stimulated investment by industry and expansion of the nation's infrastructure. In the past year, commercial use of the Internet has exploded.

The Roaring '90s: Privatization and the World Wide Web
The Internet is experiencing exponential growth in the number of networks, number of hosts, and volume of traffic. NSFNET backbone traffic more than doubled annually from a terabyte per month in March 1991 to eighteen terabytes a month in November 1994. (A terabyte is a thousand billion bytes!) The number of host computers increased from 200 to 5,000,000 in the 12 years between 1983 and 1995 -- a factor of 25,000! [It has doubled again -- to between 10 and 15 million -- between 1995 and mid-1996!]
As 1995 unfolds, many Internet service providers have gone public and others have merged or grown by acquisition. Market valuations of these companies are impressive. America Online purchased Advanced Networks and Services for $35 million. Microsoft supplied more than $20 million in capital to UUNET for expansion. UUNET and PSI have gone public. MCI has unveiled a major international Internet service, as well as an information and electronic commerce service called marketplaceMCI. AT&T is expected to announce a major new service later in the year. Other major carriers such as British Telecom, France Telecom, Deutsche Telekom, Swedish Telecom, Norwegian Telecom, and Finnish Telecom, among many others, have announced Internet services. An estimated 300 service providers are in operation, ranging from very small resellers to large telecom carriers.

In an extraordinary development, the NSFNET backbone was retired at the end of April 1995, with almost no visible effects from the point of view of users (it was hard work for the Internet service providers!). A fully commercial system of backbones has been erected where a government sponsored system once existed. Indeed, the key networks that made the Internet possible (ARPANET, SATNET, PRNET and NSFNET) are now gone -- but the Internet thrives!

One of the major forces behind the exponential growth of the Internet is a variety of new capabilities in the network -- particularly directory, indexing, and searching services that help users discover information in the vast sea of the Internet. Many of these services started as university research efforts and evolved into businesses. Examples include the Wide Area Information Servers (WAIS), Archie (which spawned a company called Bunyip in Canada), LYCOS from Carnegie Mellon, YAHOO from Stanford, and INFOSEEK.


Source: www.cs.washington.edu/homes/lazowska/cra/networks.html

Friday, November 6, 2009

NUMERICAL APERTURE





The figure on the right depicts a section of a clad cylindrical fiber showing the core with refractive index of N1 and the clad with index of N2. Also shown is a light ray entering the end of the fiber at angle (A), reflecting from the interface down the fiber. However, if angle A becomes too great, the light will not reflect at the interface, but will go out the side of the fiber and be lost. This angle, beyond which light cannot be carried in a fiber, is called the CRITICAL ANGLE and may be calculated from the two indices of refraction.



To calculate the Critical Angle, first determine the N.A. (Numerical Aperture). The N.A. of any glass combination may be calculated as follows, where N1 = the index of refraction of the core glass and N2 = the index of refraction of the cladding glass:

N.A. = sqrt(N1^2 - N2^2)

For example, taking 1.62 for N1 and 1.52 for N2, we find the N.A. to be .56. Taking the arc sine (sin^-1) of .56 gives 34 degrees: THE CRITICAL ANGLE.

As this fiber accepts light up to 34 degrees off axis in any direction, we define the ACCEPTANCE ANGLE of the fiber as twice the critical angle or in this case, 68 degrees.
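As a quick check of the arithmetic above, here is a minimal illustrative Python sketch (an editor's example, not from the source) that computes the N.A., the critical angle, and the acceptance angle from the two indices of refraction:

import math

def numerical_aperture(n_core, n_clad):
    # N.A. = sqrt(N1^2 - N2^2), the formula given above
    return math.sqrt(n_core ** 2 - n_clad ** 2)

na = numerical_aperture(1.62, 1.52)      # -> 0.56
critical = math.degrees(math.asin(na))   # -> ~34 degrees (the "critical angle" as used here)
acceptance = 2 * critical                # -> ~68 degrees, the full acceptance angle

print(f"N.A. = {na:.2f}, critical = {critical:.0f} deg, acceptance = {acceptance:.0f} deg")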


For your further information, the F/ NUMBER EQUIVALENT of the N.A. is calculated as follows:

F# = 0.5 / N.A.


The Numerical Aperture is an important parameter of any optical fiber, but one which is frequently misunderstood and overemphasized. In the first illustration above, notice that angle A is shown at both the entrance and exit ends of the fiber. This is because the fiber tends to preserve the angle of incidence during propagation of the light, causing it to exit the fiber at the same angle it entered. Now look at the figure below, which is a drawing of a typical light guide being illuminated by a projector type lamp.


Angle A (29 degrees) is the acceptance angle of a N.A. .25 fiber. Angle B (45 degrees) is the incident angle from the bulb. Angle C (83 degrees) is the acceptance angle of a N.A. .66 fiber.


Calculating the N.A. for the 45 degree angle (B) of incidence yields .38 (sin(45/2)). Therefore, fiber with an N.A. of .66 will accept all of the light from the bulb, but the output cone at the other end will be 45 degrees, not the 83 degrees that you might expect. Conversely, the N.A. .25 fiber is not capable of accepting all the light from the bulb. Any light transmitted through this fiber will create an output cone of 29 degrees.

Many people believe that using a low N.A. fiber will "focus" the light from a wider N.A. source. This is not true. As you can see, the lower N.A. fiber simply has a lower acceptance angle. While the resulting output will be projected into a tighter area, the overall light transmitted is less than what might be transmitted through a higher N.A. fiber. To focus light from a source, a lens assembly must be used to gather all available light and change the incident angle (and resulting N.A.) to match (or be less than) the N.A. of the fiber being used.
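The bulb-and-fiber example above reduces to one comparison: the sine of the source's half angle against the fiber's N.A. A small illustrative Python sketch (the 45-degree cone and the .25/.66 fibers are the figures from the text):

import math

def na_from_cone(full_angle_deg):
    # the N.A. equivalent of a light cone is the sine of its half angle
    return math.sin(math.radians(full_angle_deg / 2))

source_na = na_from_cone(45)             # the bulb's 45-degree cone -> ~0.38
for fiber_na in (0.25, 0.66):
    accepted = min(fiber_na, source_na)  # the fiber preserves the angles it accepts
    out_cone = 2 * math.degrees(math.asin(accepted))
    print(f"N.A. {fiber_na}: accepts whole cone: {fiber_na >= source_na}, "
          f"output cone ~{out_cone:.0f} deg")

This prints a 29-degree output cone for the .25 fiber and a 45-degree cone for the .66 fiber, matching the behavior described above.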


Source: www.fiberoptix.com/technical/numerical-aperature.html

Specifications of our Fiber and Cable

End Emitting or Endpoint Fiber (POF): Made for a wide range of applications, such as interior or exterior signage, scale models, floral displays, hobby projects, and underwater displays in aquariums.


End-Point Fiber Specs:
Plastic optical fiber is a total-internal-reflection type with a concentric double structure: a core of transparent polymethyl methacrylate (PMMA) of a high refractive index, covered with a thin layer of a special transparent cladding material of a low refractive index. Light entering one end is transmitted by repeated total internal reflection and discharged from the other end.
Endpoint PMMA acrylic fiber optics come in many diameters, from .25mm up to 3mm. Attenuation is measured with a 1mm fiber at 650 nm collimated light. Here are its specifications:

Refractive Index, Core: 1.492

Refractive Index, Cladding: 1.402

Numerical Aperture: 0.51

Temperature Range: -55°C to 70°C

Maximum Attenuation: 0.2 dB/m (0.18 dB/m Typical)

Applications: Signs, illumination, sensors, lamps, data link (short distance)



Sideglow Cable Specs:

As an alternative to neon, solid core Sideglow optic cable can be used to decorate buildings, highlight architectural features and exterior signage. It can be used in such applications as underwater lighting in swimming pools and spas, backlighting of signage, glass block lighting, cove lighting and landscape lighting.

Sideglow fiber optic cable is a single large-diameter solid optical gel core made from optically pure cast acrylic monomers, including MMA, to ensure flexibility and superior light transmission. It can transmit light over reasonably long distances. Light is transmitted over the entire length of the solid-core cable without electrical danger or heat. A light source such as a halogen light box is used as an illuminator, placed at one or both ends of the fiber. A color wheel can be added to the illuminator to change the color of the fiber to as many as eight colors. Its minimum bend radius is 6 times its diameter.
Utilizing a recently perfected production process called continuous casting, Sideglow solid cable can now give off brilliant color clarity and a continuous bright light transmission that was not possible in earlier solid-core fibers. A crystal-clear Teflon sleeve gives high-intensity brightness along the entire length of the optic cable. Light is carried only 100' or less before another illuminator must be added or the cable looped back to the light source.
The optic fiber is energy efficient, flexible, and requires virtually no maintenance. It is available in 5.5mm (1/5"), 7mm (1/4"), 9mm (3/8"), and 12.7mm (1/2") diameters. Spool lengths are 260 continuous feet of cable.

Sideglow cable Specs:

Temperature Stability:
Core to 120 deg. C (248 deg. F); cladding to 390 deg. C (734 deg. F)
Operating Temp. Range:
Minimum: minus 40 deg. C (-40 deg. F); Maximum: plus 120 deg. C (248 deg. F)
Moisture Absorption: Core composition is hygroscopic. Fiber ends must be sealed to avoid absorption.
Chemical Resistance: Teflon cladding is chemically resistant and impervious to solvents. The core is affected by strong solvents.
Storage: Dark, dry location where temperature is within specifications.

Spectral Range: 370 to 690 nm (visible wavelength range)

Acceptance Angle: 45 deg.

Numerical Aperture: 0.65

Glass Transition Temp: 53.8 deg. C

Attenuation: Less than 1.6% per foot (5.3% per meter)

As with any type of illumination, lighting via optical fiber requires the answers to a few questions before a successful installation can be executed. Fiber optics can distribute and project high quality illumination that can supplement traditional methods, often to an advantage. Additionally, fiber optics can perform dazzling tricks that no other form of lighting can touch. They are often used purely for their unique aesthetic, often referred to as "flexible neon".

Generally, the first question to answer is "How much light is required?" For task and area lighting, various groups and agencies have established light levels as standards and guidelines; alternatively, experience dictates what is appropriate for a given application. In either case, the need will be to project light into an area via fiber optics, and knowing what needs to come out will determine what needs to go in. For the "flexible neon", or "side-illuminated", effects, it is still necessary to determine how bright the glow will need to be to provide the desired effect. In many cases, particularly for side-illuminated fiber, experimentation may be the only way to determine what looks best. At first blush, it would seem that to evaluate fiber optics for the purposes of illumination, it would be good to start with photometric measurements. These might be a foot-candle (lux) measurement for task lighting; a lumen measurement for raw output; and, in the case of side-illuminated fiber, a foot-lambert (nit) measurement for "brightness".

But unless you already have the fiber, the light source, luminaires (for end-light), tracking (for sidelight), and all the various other bits, you will not be able to make these measurements. And neither can any of the manufacturers of optical fiber. There are just too many variables in installations that affect these three measurements for any manufacturer to be able to reasonably duplicate them all. When it comes to bare optical fiber, raw output data for fiber optic illumination are meaningless... period. A complete fiber optic illumination system is a little different: when the light source and fiber (at set lengths), along with all the other accessories, are evaluated together, manufacturers and suppliers can provide useful photometric data. But you must still be careful to follow the installation instructions very closely.

Optical fiber is a passive conductor of light; the measure of ultimate "brightness" will be largely a function of the light source powering the fiber. Most technicians consider loss, or attenuation, to be the most important evaluation parameter. All fibers exhibit attenuation that prevents 100% transmission. The amount of loss will depend on many factors: the material used in the fiber core, the surface geometry of the core/clad interface, mechanical stresses imposed on the fiber, and the finish quality on the input and output ends of the fibers, among others.

The second most important criterion is numerical aperture, which affects the light-gathering ability of the fiber.

And the third, unique to side-illuminated fiber, is the evenness of the glow effect. Scattering effects within the fiber core and cladding force light to be directed out of the fiber, something typically avoided with most optical fiber.

ATTENUATION

All fiber experiences losses, and these show up in two distinct but related forms. The gross attenuation of a fiber is concerned with broadband losses that affect the transmission of light. This figure of merit is the loss or attenuation value presented in manufacturers' literature. It is most often given as "%/foot", "%/meter", "dB/foot", or "dB/meter". The other form of attenuation is often the most important, however. Fiber losses affect certain portions of the visible spectrum more than others. Color shifting results from the selective transmission and attenuation of the various wavelengths of light passing through the fiber. These losses are minimized by using extremely pure base materials, by designing polymers that better carry the visible wavelengths, and by incorporating high-finesse fiber geometry. FOP takes advantage of all three of these to produce the best fiber available with the least amount of spectral attenuation (color shifting).
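The two unit families are interconvertible. A short illustrative Python sketch (an editor's example, using the spec figures quoted earlier in this piece):

import math

def pct_per_unit_to_db(pct):
    # percent lost per unit length -> dB per unit length
    return -10 * math.log10(1 - pct / 100)

def db_per_unit_to_pct(db):
    # dB per unit length -> percent lost per unit length
    return (1 - 10 ** (-db / 10)) * 100

print(f"{pct_per_unit_to_db(5.3):.2f} dB/m")  # Sideglow spec, 5.3%/m -> ~0.24 dB/m
print(f"{db_per_unit_to_pct(0.2):.1f} %/m")   # end-point spec, 0.2 dB/m -> ~4.5 %/m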

It is important when evaluating fiber (ours or anyone else's) on the basis of loss and color shift to make sure the same "white balance" is used, or that the data have been "normalized". This means that the data are adjusted in reference to the spectral content of the light source to eliminate fluctuations in the source colors. The light source has a dramatic effect on the measurement of fiber performance. Two sources rated identically in terms of wattage can yield vastly different results. Even if both units were rated the same in terms of optical power, there can be huge differences if the white balance is not the same or the data not normalized.

MEASURING LOSSES

This is how most manufacturers measure gross loss in a fiber: A relatively long sample of fiber is illuminated and the output is measured. A pre-determined section of the fiber is then removed from the output end, and another measurement is taken. The same length section is then again removed from the output end of the fiber, and another measurement taken. This continues until the remaining sample can no longer be cut back by the same amount. From these measurements, it is possible to calculate a loss factor. This is known as the "cut-back" method.
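The cut-back procedure lends itself to a simple calculation: convert each output reading to dB, fit a line against sample length, and the slope is the loss factor. A hypothetical Python sketch (the three readings below are invented purely for illustration):

import math

def cutback_loss(measurements):
    # measurements: (length, relative output power) pairs from successive cut-backs.
    # Returns attenuation in dB per unit length via a least-squares fit of
    # 10*log10(power) against length.
    xs = [length for length, _ in measurements]
    ys = [10 * math.log10(power) for _, power in measurements]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope  # dB lost per unit length

# hypothetical readings from 30, 20, and 10 ft of the same fiber
print(f"{cutback_loss([(30, 0.25), (20, 0.40), (10, 0.63)]):.2f} dB/ft")  # ~0.20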

LOGARITHMIC SCALE

As mentioned above, loss factors are presented either as a percentage of loss per unit length, such as "%/foot", or as a logarithmic value in decibels, such as "dB/meter" or "dB/ft". The use of decibels (dB) to measure light may be confusing at first, but here "decibel" simply refers to the logarithmic nature of loss in the fiber, exactly as the term relates to loss or attenuation in sound and radio signals. Coincidentally, logarithmic values are also appropriate for measuring light and sound because of the way our senses work. In order to deal with the huge variation in energy levels we encounter in nature, both the eye and the ear exhibit the same logarithmic sensitivity to their respective stimuli. Understanding the logarithmic nature of light is extremely important when evaluating the visual performance of fiber optics prior to installation, because it is the one figure that relates to subjective brightness.

The logarithmic sensitivity of our eyes requires a doubling of optical power to perceive an increase in "brightness". This doubling requires a 3 dB gain. Conversely, a drop of 3 dB would bring perceived brightness down to the next perceptible level. Our hearing is the same (as any car-stereo installer will tell you!): double the power of an amplifier, and you get just a bit more sound. In both cases, this logarithmic nature allows us to safely observe variations in light and sound over a 10,000:1 ratio. (When verifying this fact by experiment, it is necessary to maintain a constant spectral content as power is being reduced - a difficult task in practice when producing either aural or visual stimuli. While insensitive to gross power differences, the eye and the ear are both highly sensitive to changes in spectral balance.) All things being equal (and that is saying a lot), a length of fiber with an attenuation factor of 0.2 dB/ft will have dropped to the next level of perceived brightness after 15 feet (0.2 x 15 = 3 dB).
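The same arithmetic in a couple of lines of illustrative Python, converting a loss rate and a run length back to the linear fraction of light that remains:

def fraction_remaining(db_per_ft, feet):
    # total dB loss over the run, converted back to a linear fraction
    return 10 ** (-(db_per_ft * feet) / 10)

frac = fraction_remaining(0.2, 15)                  # 0.2 dB/ft x 15 ft = 3 dB
print(f"{frac:.2f} of the launched light remains")  # -> ~0.50, one perceived step down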



LIGHT "QUALITY" AND COLOR BALANCE

Many factors, such as contrast ratio, color, viewing angle, and ambient light conditions, will affect the observed quality of light from either type of fiber. When using end-illuminated fiber for task and/or area lighting, be prepared to adjust the spectral content of the output to match standard illumination sources. Very typically for polymer-based fibers, there will be a shift towards the green or yellow-green part of the spectrum after several feet. Correction filters used in photography to correct the color temperature of various lights may or may not work, depending on the light source's spectral content and the length of fiber used. Ideally, the proper correction filter would display the inverse (or "opposite") of the fiber's spectral attenuation curve. This would "balance" the various hues in exact proportion to each other by filtering out those portions of the spectrum shifting the color away from white light. Most manufacturers supply data concerning the spectral attenuation of their fiber products, and these can give you a good idea of what to expect in the field.

NUMERICAL APERTURE, F#, AND ACCEPTANCE ANGLES

After attenuation, the numerical aperture is the next important consideration. Bear in mind that a higher or lower NA (a wider or narrower acceptance angle) does not make a fiber "better" or "worse". In some applications, there may be an advantage to the wider spread of light possible from larger-NA fibers, but there are practical trade-offs that may cancel out any gains. Similarly, the narrow angles of low-NA fiber can improve light source coupling, but may impose other constraints, such as higher cost. It should be noted that currently no manufacturer of large-core (6mm and up) polymer fiber makes small-NA fiber, though some smaller-diameter small-NA fibers exist. (We consider NAs under .45 to be small, those over .45 to be large.)

The way fiber optics work (dictated by physics) imposes limits on the angles through which light can enter the fiber. This limit is called the Numerical Aperture (NA) of the fiber, and it has the same effect as the aperture in a camera lens: it limits the angles of light rays passing into the system. Both can be evaluated in terms of F#, NA, or acceptance angle. Like camera apertures, the "faster" the fiber, the more light it can collect. A camera lens of F#1.0 is considered very fast; a fiber at F#1.0 is about average. This is equivalent to a numerical aperture of .50 and a full acceptance angle of 60 degrees.

Here is the relationship:

F# = distance to target / diameter of spot at target

NA = 0.5 / F#

Acceptance angle (full) = 2 x arcsin(NA)
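These three relationships in illustrative Python (an editor's sketch, reproducing the figures used in the surrounding text):

import math

def na_from_fnum(fnum):
    return 0.5 / fnum                       # NA = 0.5 / F#

def fnum_from_na(na):
    return 0.5 / na                         # the same relation, inverted

def full_acceptance_angle(na):
    return 2 * math.degrees(math.asin(na))  # acceptance angle (full) = 2 x arcsin(NA)

print(na_from_fnum(1.0))                         # F#1.0 -> N.A. 0.50, the "average" fiber
print(f"{full_acceptance_angle(0.66):.2f} deg")  # N.A. .66 -> 82.59 deg
print(f"F# = {fnum_from_na(0.66):.2f}")          # N.A. .66 -> ~0.76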

Just how the NA affects performance can be illustrated as follows: Imagine yourself in a dark room with a window that has been painted over so as to be opaque. You want to look outside, so you start to scratch the paint off the window, but you make the smallest of holes so your activity won't be noticed. You have to bring your eye right up to the hole in order to see out, and you really can't look around, because the hole is too small to see much more than straight ahead. So you open the hole a little more (at great risk of being caught!) and look again. Now that the hole is larger, you can see over a wider range of angles. By increasing the hole diameter - the aperture - you allow light rays from more angles to pass into your eye. Not only that, but the room gets brighter as the hole becomes larger. The aperture of an optical fiber is a little different in terms of how it is formed, but the effect is exactly the same: the larger the aperture, the more light you can "couple" into the fiber. But rather than being a simple hole in a surface, the aperture of a fiber is formed by what is called the "critical angle".

Fiber Optic Products fiber has an NA of .66, which calculates to an acceptance angle of 82.59 degrees and an F# of about .76. What is actually useful, though, is usually somewhat less. Practical limits on the perfection of fiber geometry and chemical composition, as well as installation-specific effects, all work to decrease the useful angle, so most designers don't feel the need to run light out to the maximum acceptance angle. The half-power points in the angle vs. throughput graph are often used to set what we call the "working acceptance angle". Additionally, finding light sources that operate efficiently at the extreme wide angles is also difficult... try going to a light source designer, telling them you want a light cone converging at F# = .76, and watching them squirm!



CRITICAL ANGLES

Contrary to popular belief, fiber optics are almost never "silvered on the inside", or hollow, though some exotic fibers are either or both. The vast majority of optical fiber relies on the phenomenon of total internal reflection, to conduct light from end to end. In the same manner as the sky is reflected from hot pavement, light traveling through a fiber is re-directed back into the core whenever it begins to wander out. And just like a hot pavement mirage, the effect only works at certain angles. Exceed the critical angle and you see pavement and not the reflection of the sky. Or, in the case of fiber, the light passes through the side instead of getting a nudge back into the core.

What determines this critical angle is the relationship between the fiber core (equivalent to the relatively cool air several inches above the pavement) and the fiber cladding (equivalent to the layer of hot air hugging the road surface). It is this relative difference in "optical density" (better known as "refractive index") between the hot and cold air over the pavement, and between the core and cladding of the fiber, that provides the mechanism for reflection. Light travels more slowly in optically denser media. When a ray of light encounters the "interface" - the boundary between more and less dense media - the laws governing the conservation of energy dictate that the energy present in the ray must undergo a transformation if it is to pass into a medium of different density, a process that cannot occur without loss. Re-direction incurs less of a loss penalty than transmission. And like so many things in nature, light will tend to follow the path of least resistance, and that path is back into the core of the fiber, the lowest energy-loss option.
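In the conventional formulation, the critical angle is measured from the normal at the core/clad boundary and depends only on the two refractive indices (Snell's law); the article's earlier use of "critical angle" for the entrance half angle (arcsin of the N.A.) is the same physics viewed from the fiber's end face. An illustrative Python sketch, using the PMMA end-point fiber indices quoted in the specs above:

import math

def interface_critical_angle(n_core, n_clad):
    # incidence angle at the core/clad boundary (measured from the normal)
    # beyond which light is totally internally reflected back into the core
    return math.degrees(math.asin(n_clad / n_core))

# PMMA end-point fiber from the specs above: core 1.492, clad 1.402
print(f"critical angle: {interface_critical_angle(1.492, 1.402):.0f} deg")  # -> ~70 deg
print(f"N.A. = {math.sqrt(1.492**2 - 1.402**2):.2f}")  # -> 0.51, matching the spec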



MODES

Up to a point, then, larger NAs mean the fiber can gather light from wider angles. But past an NA of about .60, equivalent to an acceptance angle of around 70 degrees, there isn't much useful light available: "mode stripping" in the fiber removes much of the light at the extremes of the acceptance angle. In discussing or reading about optical fiber, the term "mode" often comes up. "Single-mode" and "multimode" are used to describe two basic classes of fiber optics. A mode is simply a path that a ray of light can take through an optical conductor. Thus, "single-mode" refers to an optical conductor that allows only one path for the light ray to follow. In order for this to happen, the optical conductor must be very small - generally under 10 microns, or about 1/2500 of an inch. But the advantages gained for applications like high-speed data transmission are so significant that the use of such tiny structures is routine and relatively simple.

For the job of illumination, however, much larger fibers are more efficient. These are "multimode" fibers - countless modes, in the case of the very large fiber we manufacture. But the "order" of the modes, which (in a serious oversimplification) refers to the relative number of "bounces" the light ray takes as it passes through the fiber, has limits. On the low end, the lowest-order mode possible would be a straight path right down the fiber with no bounces. At the other end of the scale, the highest-order mode is a ray following a path right at the critical angle. So long as it doesn't exceed this angle, a light ray following this high-order mode will travel from end to end.

But like a driver careening down a one-lane mountain road with no shoulder, these highest-order modes are prone to being lost "over the edge" if something goes wrong. Because they are so close to the critical angle, these rays are the first to be lost, or "stripped", if something causes them to exceed the critical angle. Many things can cause this to happen: microscopic deviations from a smooth surface, scattering effects, too tight a bend radius, or clamps tightened too hard can all have this effect. The point here is that the highest-order modes are lost pretty quickly. There are no perfectly smooth, flat, transparent materials, and so some mode stripping is always going to occur. This is why, again contrary to popular belief, output angles are not the same as input angles.

EVEN OUTPUT

For side-illuminated applications, the evenness of the light spread along the length of the fiber is as important as overall clarity. By taking advantage of controlled molecular-scattering phenomena, we have tailored our side-illuminating fiber to achieve a high degree of evenness along lengths up to 260 ft (80m). Evenness is also highly dependent upon the illuminator and "launch" conditions. Projecting the light from the lamp into the fiber at narrower-than-normal angles can improve the evenness of light over longer lengths. But there is no agreed-upon method for gauging "evenness", or even agreement on just what should be measured. Reputation, and subjective testing of a sample, are often the only ways to get an idea of what to expect.

Manufacturing quality is also an important factor affecting the evenness of the glow effect. Material purity and consistency, and the quality of the core/clad interface, will contribute significantly to the quality of the glow. The installation will also affect glow quality: tight bend radii, tight clamping or tracking, and excessive bending or kinking prior to installation will produce deleterious effects.

After the optical considerations have been addressed, the mechanical aspects of installing fiber need to be looked at. Some of these will affect what is practically possible, particularly heat.

An ideal light source for fiber illumination would contain no invisible radiation - no infrared and no ultraviolet. But aside from lasers (which are monochromatic, or quasi-monochromatic at best), no such source exists. And even if there were such a source, the contribution of visible light to the heat load on a fiber can be considerable, with varying results depending on the fiber material. Why? Because no medium is loss-free, and virtually all fibers have "absorbency" losses: atomic and molecular structures resonate with photons of various wavelengths of light and convert them to heat (see also the section on spectral attenuation, above). So as the visible radiant energy applied to the end of a plastic optical fiber increases, so does the heat. Additionally, because it is impossible (or nearly so) to produce enough visible light to be useful without also producing infrared energy, the infrared light contributes a further heat load to the system.

THERMAL RADIATION

Heat will cause a polymer fiber to burn if the temperature is too high for too long. Far more common, however, are changes in the optical and mechanical properties of the fiber that occur well before burning. The extra heat can further "polymerize" the core materials, affecting both the way they transmit light and the stiffness of the fiber. As it works out, the more flexible a fiber is, the poorer its transmissive qualities: the plasticizers used to make the fiber flexible absorb a certain amount of light. These and several other factors, many of which are mutually exclusive, dictate a balancing act for the chemist formulating the fiber polymer. A choice has to be made: either sacrifice clarity for flexibility, or sacrifice flexibility for clarity. Fiber Optic Products fiber is formulated to favor clarity over flexibility. The penalty is not severe: the fiber is flexible when installed, but becomes stiffer with photo/thermal activity. The same activity improves fiber clarity and reduces color shifting over time.

But in spite of turning some heat to advantage, service temperatures that are too high will degrade the fiber over time. Illuminator design can become very complex for this reason. Illuminator designers have opted for safer, lower-wattage halogen lamps, or for lamps such as metal halide, which produce inherently less infrared energy. Always follow the light source manufacturer's instructions for both using the illuminator and connecting the fibers to it. An interesting note: we have had customers report a sensation of heat accompanying the light from the end of some fibers. This isn't infrared energy being transmitted through the fiber, but rather visible light being converted to heat by the skin, underlying tissue, and blood supply.

"UV" EXPOSURE

At the other end of the visible spectrum is ultraviolet. When it comes to polymers, "ultra-violence" is more like it (a nod to Anthony Burgess, author of "A Clockwork Orange"). Ultraviolet light wreaks havoc on polymers by breaking chemical bonds within the molecular structure of the material. The result varies depending on the polymer in question: nearly all polymers used in optical fiber will turn yellow, brown, or dark red when exposed to UV over time. Even the very best fiber will turn color after just a few months of spring/summer exposure if left in the sun without protection (we know; we make it and we test it). So outdoor installations simply have to be protected from UV exposure; there is no way around it. No fiber maker, ourselves included, will warranty its product for unprotected outdoor use. Daylight is not the only source of these problematic wavelengths: nearly every light source used for lighting fiber produces some UV energy. This is even true of quartz-halogen lamps, and especially true of metal-halide lamps. It is crucial that the illuminator manufacturer provide a means of reducing UV to negligible levels before the light is launched into the fiber. One more point: UV light is scattered all over the sky and can still be a problem even if the fiber is shielded from direct sunlight. UV is also strongly reflected by water, so pools and spas need more than a simple overhang to protect the fiber effectively.

BEND RADIUS LIMITATIONS

Bending too tightly (8 times the fiber diameter is the limit!), kinking, repeated flexing at tight radii, stepping on the fiber, placing heavy (20 lb+) objects on it, and localized heating can all destroy the quality of the core/clad interface and so reduce transmission. Our fiber is durable and will take a lot of punishment of certain kinds, but as with everything, care and attention must be paid for proper results. Firm yet gentle fixturing and tracking is the rule: use sweeping elbows in conduit runs, don't run the fiber over hot water pipes without insulation, and make sure uncoiling is done carefully.
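
The 8-times-diameter limit quoted above lends itself to a quick installer's sanity check. The function name and the sample diameter here are ours, purely for illustration:

def min_bend_radius_mm(fiber_diameter_mm):
    """Minimum safe bend radius: 8 times the fiber diameter, per the text."""
    return 8.0 * fiber_diameter_mm

# Example: a hypothetical 10 mm side-glow fiber.
diameter_mm = 10.0
print(f"A {diameter_mm:.0f} mm fiber should never be bent tighter than "
      f"a {min_bend_radius_mm(diameter_mm):.0f} mm radius")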

Source: www.fiberopticproducts.com/Specs.htm