Saturday, December 19, 2009

How to find the URL of an image or picture on the internet?

First, find the image, logo, or picture that you want the URL for.
Hover your mouse over the image and right-click it.
A menu will appear. Go to the bottom of the menu and choose Properties.

The properties for that image or picture will now appear.
In the Properties window, look for the field labelled Address (URL).
Select this address by highlighting it.

Then right-click the highlighted selection and a menu will appear.
Choose Copy.
Now you can paste this URL wherever you want.


Source:www.tips4pc.com/Internet-tips/how_to_find_the_url_of_an_image.htm
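As an aside, if you ever need to collect image URLs in bulk rather than one at a time, the same job can be done programmatically. Below is a small illustrative Python sketch using only the standard library: it fetches a page and prints the src address of every img tag it finds. The example page address is a placeholder.

from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class ImgSrcParser(HTMLParser):
    """Collects the src attribute of every <img> tag on a page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.urls = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            for name, value in attrs:
                if name == "src" and value:
                    # resolve relative addresses against the page URL
                    self.urls.append(urljoin(self.base_url, value))

page = "http://www.example.com/"  # placeholder page address
html = urlopen(page).read().decode("utf-8", errors="replace")
parser = ImgSrcParser(page)
parser.feed(html)
print("\n".join(parser.urls))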

How to make headings and titles in a Word 2007 document?

I do not know about you, but I like documents with clear headings or subheadings that label their sections. They make it easier to spot the sections that interest you, and they define the area you are reading. Microsoft Word 2007 makes it very easy to format text into headings and titles throughout your document.
Here's how to make a normal word in a document into a heading or title:
Open your Microsoft Word 2007 document.
Make sure you are on the Home tab or ribbon.
Select the text that you wish to turn into a heading by highlighting it.
In the Styles group on the ribbon you will see which style your text currently uses.
The text I have selected is Normal; however, I want to make it a heading.
To change the normal text to a heading, simply choose another style from the list.
To see more style choices, click the arrow next to the style gallery.
When you hover your mouse over the styles, your text immediately changes to give you a preview.
You can make your text a title, subtitle, heading 1, and so on.
You can also change the individual style for headings, titles, and so on by pressing the Change Styles button, altering the font color, size, and other attributes. So, for example, if you do not like the look of the Heading 1 style, you can change it.

Source: www.tips4pc.com/Word-2007/how_to_make_headings_and_titles.htm
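For completeness, the same styling can be applied without opening Word at all. The sketch below uses the third-party python-docx package (not part of Word itself; assumed installed via pip install python-docx) to apply Word's built-in styles to paragraphs. The file name is a placeholder.

from docx import Document  # third-party: pip install python-docx

doc = Document()
# "Title" and "Heading 1" are Word's built-in style names,
# the same ones shown in the style gallery on the Home tab.
doc.add_paragraph("My Document", style="Title")
doc.add_paragraph("Introduction", style="Heading 1")
doc.add_paragraph("This paragraph stays in the Normal style.")
doc.save("example.docx")  # placeholder file name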

What's the difference between a virus, spyware, malware, and adware?

Nowadays the average computer user must be aware of the potential threats their computer faces each time it connects to the world wide web. The web is a dangerous place for a computer, and the security threats are growing every day. It is all so confusing: viruses, spyware, malware, and adware, just to name a few. How do they all differ?

In the old days we used to call everything a virus; however, we now have more precise names to categorize these threats. Below I will show you the difference between a virus, spyware, malware, and adware.

What is Malware?
Malware is a software program with bad intentions. It can be installed by the computer user accidentally, or it can sneak into your computer through various avenues. It is not the same as a piece of software that causes harm to your computer by chance; malware is software that has been developed with the intent of causing problems on your computer.

What is Spyware?
Spyware is a type of malware program that invades your computer and basically spies on you. There are different types of spyware that collect different information. A common spyware type is a keylogger which records keystrokes typed on your keyboard. This is how people lose their bank account details. Other spyware will record your actions and browsing habits on the internet. Any information collected by spyware is usually with the intent to sell.

What is Adware?
Adware is another form of malware and is exactly what the name suggests: software with advertising. Adware can be downloaded on its own and is sometimes included in free programs. For example, Windows Live Messenger and Yahoo Messenger contain adware. Although some programs give you the option not to install the extra adware, others seem to sneak it in without permission.

What is a Virus?
A virus is a small program designed to infect your computer and cause errors, computer crashes, and even damage to your computer hardware. Unlike spyware, a virus can grow and replicate itself. It can also travel from one computer to another via an internet connection. Of course you can get viruses from discs with virus-infested files stored on them, but the internet is the most common entry point. Some common symptoms of a virus are emails being sent to all your contacts when you didn't send them, being taken to webpages that you didn't choose, or being told you have a virus and to download a program to fix it.

Source:www.tips4pc.com/Computer_security/whats_the_difference_between_a_v.htm

How does a reserve price work and when should you set one on eBay?

A reserve price is the absolute lowest price that you want to get for your item. You may sell the item below the reserve price, but you will not be required to. In other words, if you set your reserve price at $100 and your highest bid is only $50, you do not have to sell the item to the highest bidder. You can close the auction without any negative feedback or repercussions. Set your reserve price at the absolute lowest price you are willing to sell your item for, keeping in mind what the item is worth as well as what it cost you.



Source:www.tips4pc.com/Ebay-tips/how_to_price_your_ebay_items.htm

How does a Buy It Now price work on eBay?

The 'Buy It Now' option allows you to set a price at which buyers can buy the item immediately, without bidding. This option can be used for any type of item, and it should be set to match your reserve, give or take a few dollars. It is especially useful if you have multiple identical items to sell.

Shipping has a price, and potential buyers take it into consideration when they look at an auction. If you can see your way clear to offering free shipping, you will find that people place more bids. Make sure that your potential buyers realize you are offering free shipping!

Before setting any prices, you need to determine what the item is really worth. The value of the item in other markets might be quite high; however, you are selling on eBay, and it is a different world altogether! Find out the price that similar items sold for on eBay before setting any prices. If it is a collectable or a high-ticket item, have it evaluated to ensure that you aren't going to lose money.

Source:www.tips4pc.com/Ebay-tips/how_to_price_your_ebay_items.htm

What is a starting price on eBay and when should I set one?

The starting bid price is fairly simple - never set it higher than $50 or so, no matter what your item is really worth. This low opening price will attract bidders to your auction. Setting low starting bid prices creates the need for a reserve price.



Source:www.tips4pc.com/Ebay-tips/how_to_price_your_ebay_items.htm

What are the three different methods for setting a price for your items on eBay?

There are only three prices that can be set for an eBay auction: the ‘buy it now’ price, the reserve price, and the starting bid price. Of these three, the starting bid price is the only one that is required for an eBay auction. The reserve price and the ‘buy it now’ price are optional.



Source: www.tips4pc.com/Ebay-tips/how_to_price_your_ebay_items.htm
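To make the interplay of the three prices concrete, here is a toy Python sketch of the rules described in these posts. It is only an illustration of the logic, not eBay's actual system; all names and numbers are invented.

def auction_outcome(highest_bid, starting_bid, reserve=None,
                    bought_via_buy_it_now=False, buy_it_now_price=None):
    """Toy model: the starting bid gates whether bidding happens at all,
    the optional reserve lets the seller decline a low winning bid, and
    Buy It Now ends the auction at a fixed price."""
    if bought_via_buy_it_now:
        return f"sold immediately at the Buy It Now price of ${buy_it_now_price}"
    if highest_bid is None or highest_bid < starting_bid:
        return "no valid bids; item not sold"
    if reserve is not None and highest_bid < reserve:
        return (f"highest bid ${highest_bid} is below the ${reserve} reserve; "
                f"the seller may close without selling")
    return f"sold to the highest bidder for ${highest_bid}"

print(auction_outcome(highest_bid=50, starting_bid=25, reserve=100))
print(auction_outcome(highest_bid=120, starting_bid=25, reserve=100))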

How to Price Your eBay Items

When new sellers start an eBay auction business and begin to sell items on eBay, most do not realize how important the pricing of their items is. Many sellers are not even aware that pricing needs to be done, since eBay is an auction site after all. Usually in an auction you simply sell the item to the highest bidder; however, eBay runs its auctions a little differently. In the real world an auctioneer waits for an opening bid, however much that might be, whereas eBay lets you set a starting price to begin from.

In fact, a great deal of work goes into deciding and setting prices for eBay auction listings, and essentially there are three different price categories.

Source: www.tips4pc.com/Ebay-tips/how_to_price_your_ebay_items.htm

Sunday, December 13, 2009

MS Office applications

Word
Microsoft Word Tutorial
A definitive, step-by-step guide on the basic features of Microsoft Word. This online tutorial is great for those new to the world of computers. It shows you how to do everything from saving and printing documents to adding more stylish and personal touches to your work.

Excel
NO erasers! NO new formulas! NO calculators! Learn how to use Excel with these helpful tutorials, and spreadsheets will never scare you again:

• Basic Excel Tutorial
• Microsoft Excel 2002 Tutorial
• Microsoft Excel Tutorial

PowerPoint
Microsoft PowerPoint 2003
This course of video tutorials shows you how to create presentations, format text, add links, images, animation and media clips to bring your slides to life.

These tutorials cover the basics of creating a PowerPoint presentation, the drawing toolbar, how to apply a design, colour schemes, transitions, and much more.

• Technology for Teachers PowerPoint Tutorial
• PowerPoint 2000 Tutorials
• Microsoft PowerPoint Tutorial

Source:www.ltscotland.org.uk/ictineducation/ictadvice/onlinetutorials/index.asp#basic

VoiceXML and CCXML

Voice Extensible Markup Language (VoiceXML) is an open-standard extensible markup language that was developed to fulfill the increasing demand to create audio-based applications using open standards. The main use of VoiceXML is the creation of interactive voice response (IVR) and automated speech recognition (ASR) applications using a web-based model to retrieve content, manage voice services, and access speech engines and services within the network. VoiceXML-based applications have proven to be as much as three times faster to develop than with traditional proprietary IVR tools (14-18 months to deployment vs. 6-9 months with VXML), but most importantly, an application written in VoiceXML can easily move from one VXML platform to another with minimal downtime.

Call control is an important part of voice services, as it allows voice conversations to be manipulated by bridging two users together or separating them. Furthermore, it allows a user to be placed into a dialog system, such as a VXML service. All of these services can be enabled by the call control language, CCXML.

The Vision VoiceXML Server simplifies the technically challenging task of building interactive voice and video response applications by providing the key elements (e.g., VXML and CCXML) for rapid development of robust and dynamic applications that involve complex media processes. The Vision VoiceXML Server supports the VoiceXML language standard, as well as CCXML to control inbound and outbound dialing, call transfers, and conferencing.


Source:www.nmscommunications.com/DevPlatforms/Technologies/VoiceXMLandCCXML/default.htm
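To give a feel for the web-based model described above, here is a minimal Python sketch of the kind of VoiceXML document a web server might return to a VoiceXML browser. The element names come from the VoiceXML 2.0 standard; the one-form dialog itself is a made-up example.

import xml.etree.ElementTree as ET

# A one-form dialog: speak a prompt, then disconnect the caller.
HELLO_VXML = """<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form id="greeting">
    <block>
      <prompt>Hello from a VoiceXML application.</prompt>
      <disconnect/>
    </block>
  </form>
</vxml>
"""

ET.fromstring(HELLO_VXML)  # check the document is well-formed XML
print(HELLO_VXML)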

SIP and VoIP

Voice over Internet Protocol (VoIP) is bringing about significant changes in the telecommunications industry with innovative services enabled by flexible and efficient packet transport. The convergence of IP networks and the PSTN has also led to a growing demand for circuit-based technology that is VoIP-ready.

Open Access Boards and Software
At the heart of today's VoIP networks is the Session Initiation Protocol (SIP), which is increasingly used for interworking functions between network services, as well as for signaling between devices. NMS offers an extensive set of board and software components and voice coding technologies for developers designing SIP-based VoIP solutions.

The SIP API for Natural Access, based on the popular Natural Call Control API, allows developers to create applications that can easily operate in either the PSTN or VoIP domains, or both simultaneously.
The CG Series of boards offers scalable, high-performance development platforms for converged PSTN and IP telephony solutions, designed to meet the connectivity, flexibility, and performance requirements of new applications such as VoIP gateways and IP media servers.
PacketMedia HMP gives developers a comprehensive software-only solution on a standard x86-architecture computer for creating a wide range of powerful IP media server applications ranging from announcement servers to voicemail and more.
Vision Media Gateway
At the system level, NMS offers the Vision Media Gateway for network equipment providers, system integrators, and application developers seeking to connect their IP-based enhanced services to both PSTN and IP networks. The off-the-shelf Vision Media Gateway provides the interface between the PSTN and SIP-based service platforms, supporting applications such as network announcements, messaging, conferencing, self-service, voice portals, call centers, IP and mobile Centrex, and more.

The Vision Media Gateway provides PSTN network interfaces and signaling, as well as fully integrated gateway and call routing functions, such as splitting VoIP streams for ASR engines or IP call agents. The Media Gateway also includes a scriptable call routing engine, eliminating the need for a separate media gateway controller or application server in simple applications.


Source:www.nmscommunications.com/DevPlatforms/Technologies/SIPandVoIP/default.htm
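As a rough illustration of what SIP signaling looks like on the wire, the Python sketch below assembles a minimal INVITE request in the textual format defined by RFC 3261. The addresses, tag, and branch values are invented placeholders; a real SIP stack generates and tracks these itself.

def build_invite(caller, callee, call_id):
    """Assemble a minimal SIP INVITE request (RFC 3261 text format)."""
    lines = [
        f"INVITE sip:{callee} SIP/2.0",
        "Via: SIP/2.0/UDP client.example.com;branch=z9hG4bK776asdhds",
        f"From: <sip:{caller}>;tag=1928301774",
        f"To: <sip:{callee}>",
        f"Call-ID: {call_id}",
        "CSeq: 1 INVITE",
        f"Contact: <sip:{caller}>",
        "Content-Length: 0",
    ]
    # SIP, like HTTP, uses CRLF line endings and a blank line
    # before the (here empty) message body.
    return "\r\n".join(lines) + "\r\n\r\n"

print(build_invite("alice@example.com", "bob@example.net", "a84b4c76e66710"))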

SS7

Signaling System 7 (SS7), also known as C7, is the carrier signaling protocol used for call control and providing Intelligent Networking (IN) and Advanced IN (AIN) services. SS7 is often used for applications such as IP signaling gateways, wireless infrastructure, and a wide variety of in-network enhanced services, including voice and fax messaging, one-number/follow-me, number portability, and pre-paid services.

SS7 Boards and Software
Open Access SS7 hardware and software platforms provide developers and OEMs with a new level of call control and message redundancy for high availability options in the most demanding in-network applications. Our SS7 solution supports point code redundancy for true telco-grade high-availability applications with full chassis-level redundancy or board-level redundancy within a single chassis.

Our integrated SS7 protocol stack, working in combination with our TX Series hardware, offers switch-specific and high-availability extensions that meet worldwide telecom requirements. Our boards support ISUP, TUP, SCCP, TCAP, and either MTP layers 1, 2, and 3 for TDM connectivity or IP, SCTP, and M3UA (SIGTRAN) for IP network connectivity. These software stacks run on-board, freeing the host computer for applications-related activities. Protocols have been tested against the ETSI, ITU-T, and ANSI standards used in the major telephony markets and enable applications to interoperate with all major CO switches.


Source:www.nmscommunications.com/DevPlatforms/Technologies/SS7/default.htm
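As a small illustration of the protocol terms above (and not of NMS's own API), the Python sketch below models the MTP3 routing label that SS7 uses to route messages between signaling points: originating and destination point codes plus a signaling link selection field, using the 14-bit point codes of the ITU variant.

from dataclasses import dataclass

@dataclass
class Mtp3RoutingLabel:
    dpc: int  # destination point code (14 bits, ITU variant)
    opc: int  # originating point code (14 bits, ITU variant)
    sls: int  # signaling link selection, used for load sharing (4 bits)

    def validate(self):
        for name, value, bits in (("dpc", self.dpc, 14),
                                  ("opc", self.opc, 14),
                                  ("sls", self.sls, 4)):
            if not 0 <= value < (1 << bits):
                raise ValueError(f"{name} must fit in {bits} bits")

label = Mtp3RoutingLabel(dpc=2057, opc=1031, sls=5)
label.validate()
print(label)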

IMS

The IP Multimedia Subsystem (IMS) is a standardized IP-based architecture that will allow the convergence of fixed and mobile communication devices, multiple network types, and multimedia applications. Using IMS, future applications will combine voice, text, pictures, and video in seamless call sessions, offering significant ease of use to subscribers and allowing service providers to drive branding through a common interface, while substantially reducing operating costs.

NMS offers decades of experience in the media processing and signaling space, SIP-enabled handset technologies, and web-based development environments to developers and NEPs who want to participate in the IMS era. Today's media-rich applications use products from NMS's Vision family and the MG 7000A AdvancedTCA media processing blade, which will evolve with the latest IMS standards to enable end-to-end solutions for IMS-based environments.


Source:www.nmscommunications.com/DevPlatforms/Technologies/IMS/default.htm

AdvancedTCA (ATCA)

The Advanced Telecommunications Computing Architecture (AdvancedTCA or ATCA) is an open industry specification for building high-performance telecommunications and data communications systems. Developed by the PICMG consortium, AdvancedTCA is now poised to make considerable inroads into a market that has traditionally been dominated by vertically integrated proprietary systems.

The PICMG 3.0 Series of specifications for AdvancedTCA incorporates the latest trends in high-speed interconnect technologies, next-generation processors, and improved reliability, manageability, and serviceability. Specifically, the AdvancedTCA architecture:

• Meets the evolving needs of the communications network infrastructure
• Provides functionality required to implement rugged, highly available, network-quality systems
• Supports wireless, wireline, and optical network elements
• Provides high levels of service availability (99.999% or more) for the central office environment
• Offers scalable performance and capacity
• Reduces development time and total cost of ownership
• Features an open architecture and modular COTS components, sourced by a dynamic, interoperable, multi-vendor market

NMS and AdvancedTCA
For more than 20 years, NMS has been a leader in technology innovation and standardization, including AdvancedTCA. NMS has been actively involved in the development of the specification and in subsequent industry interoperability events. Building on this experience, NMS has shipped the first commercially available media processing blade — the MG 7000A. Featuring a powerful combination of high-speed IP packet handling, four Gigabit Ethernet interfaces, high-density DSP voice media processing power, and optional T1/E1/J1 interfaces, the MG 7000A is the perfect choice for a wide range of network-based applications including IP media servers and enhanced service platforms. Follow the links in the Related Information section below to learn more about the MG 7000A and AdvancedTCA.


Source:www.nmscommunications.com/DevPlatforms/Technologies/ATCA/default.htm
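The "five nines" figure in the list above translates directly into permitted downtime; a quick back-of-the-envelope check in Python:

MINUTES_PER_YEAR = 365.25 * 24 * 60

for availability in (0.999, 0.9999, 0.99999):
    downtime = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.3%} availability -> "
          f"about {downtime:.0f} minutes of downtime per year")
# 99.999% allows only about 5 minutes of downtime per year.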

3G-324M and IP Video

In the 3G mobile video market, the de facto standard and the industry standard for communications between a video-enabled mobile handset and applications residing in the network are one and the same: the 3G-324M umbrella standard. 3G-324M, defined by the 3GPP (3rd Generation Partnership Project), comprises several sub-protocols that provide the basis for creating applications, ensuring interoperability, and providing connectivity.

The NMS Open Access framework provides the ideal enabling technology to meet the ever-increasing demand for scalable, cost-effective mobile video solutions. Specifically, our Video Access products with the Software Video Transcoder facilitate a wide range of powerful video applications, from 3G-324M wireless video gateways to video messaging and streaming servers, to manage video media adaptation within the network.

The NMS Vision VoiceXML Server is also 3G-enabled; included with the Vision VoiceXML Server are extensions to the VXML language that support .3gp files.

Pioneering the field, NMS is a world leader in video-enabling technology for deployments by major wireless carriers in Asia and throughout the world. NMS actively promotes mobile video through industry speaking engagements and through our work with the International Multimedia Telecommunications Consortium (IMTC).



Source:www.nmscommunications.com/DevPlatforms/Technologies/3G324MVideo/default.htm

1.07 Analog Circuits

• Telephones transmit information over copper wires using voltage
• Voltage is a representation, or analog, of the speaker's voice
• Hence an "analog" circuit


The technique for representing information on an ordinary local loop is called analog. This term is often thrown about with little regard for its actual meaning, so we'll spend a bit of time understanding what is meant by "analog".
The term analog comes from the design of the telephone. A microphone in the telephone handset is placed in the path of the sound pressure waves coming out of the speaker's throat. As the sound pressure waves hit the microphone, they change its electrical characteristics, and this change is used to vary a voltage on the telephone wires.
This voltage is a representation, or analog, of the sound pressure waves. That is all we mean by analog: representation. The voltage on the wires is an analog of the sound pressure waves coming out of the speaker's throat.
People then stretch the terminology to call the two copper wires which form the telephone line an analog circuit, which is not very accurate. The only thing analog in this story is the method for representing information on the copper wires. We can use digital techniques on the same wires.


Source:www.telecommunications-tutorials.com/tutorial-analog-circuits.htm
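To make the idea of "representation" concrete, here is a toy Python sketch: the line voltage is computed as a scaled copy, an analog, of the sound pressure hitting the microphone. The 1 kHz tone and the scaling constant are arbitrary choices for illustration.

import math

VOLTS_PER_PASCAL = 0.5  # hypothetical microphone/line sensitivity

def pressure(t):
    """Sound pressure (Pa) of a 1 kHz tone at time t (seconds)."""
    return 0.2 * math.sin(2 * math.pi * 1000 * t)

def line_voltage(t):
    """The loop voltage: a direct analog of the sound pressure."""
    return VOLTS_PER_PASCAL * pressure(t)

for i in range(4):
    t = i / 8000  # sample at 8 kHz, the classic telephony rate
    print(f"t = {t * 1000:.3f} ms: pressure {pressure(t):+.4f} Pa, "
          f"voltage {line_voltage(t):+.4f} V")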

1.05 The Public Switched Telephone Network

Many communication technologies are based on those used in the Public Switched Telephone Network (PSTN), so regardless of whether you're interested in voice, data or networking, it is important to have an understanding of the structure and operation of the telephone network.
We begin with a basic model for the telephone network and will build on it in subsequent discussions. At the top of the diagram, we have a telephone and a telephone switch. The telephone is located in a building called a Customer Premise (CP), and the telephone switch is located in a building called a Central Office (CO). One could refer to the telephone as Customer Premise Equipment (CPE).
The telephone is connected to the telephone switch with two copper wires, often called a local loop, a subscriber loop, or simply a loop. This is a dedicated access circuit from the customer premise into the network. We usually have the same arrangement at the other end, with the far-end telephone in a different customer premise and the far-end telephone switch usually in a different central office.
Copper is a good conductor of electricity - but not perfect: it has some resistance to the flow of electricity through it. Because of this, the signals on the loop diminish in intensity, or attenuate, with distance, and if the loop were too long, you wouldn't be able to hear the other person. The maximum resistance allowed is usually 1300 ohms, which works out to about 18,000 feet or 18 kft - roughly 3 miles or 5 km - on standard-thickness 26-gauge cable, but could be as long as 14 miles or 22 km on thicker 19-gauge cable. Thus, COs traditionally had a serving area of three miles radius around them, about 27 square miles or 75 km². With suburban sprawl, we can't build COs every five miles, so in practice new subdivisions are served from remote switches, which are low-capacity switches in small huts or underground controlled-environment vaults. The remote provides telephone service locally on the loops in the subdivision. The remote and its loops are connected back to the nearest CO via a loop carrier system that uses fiber or radio.
Telephone switches are connected with trunks. While subscriber loops are dedicated access circuits, trunks are shared connections between COs. To establish a connection between one customer premise and another, the desired network address (telephone number) is signaled to the network (to the CO switch or remote) over the loop; the switch then seizes an unused trunk circuit going in the correct direction and connects the loop to that trunk for the duration of the call. When one end or the other hangs up, the trunk is released for someone else to use between those two COs. This method for sharing the trunks is known as circuit switching. It was called dial-up when telephones had rotary dials. It is important to note that even though today there may be digital switching and digital transmission, the last 3 mi / 5 km of the network, the subscriber loop, most often still has its original characteristics, which date back to the late 1800s (!).
Voice and data equipment which connects to the PSTN over regular telephone lines must work within the characteristics of the local loop, so an understanding of the characteristics and limitations of the local loop is essential.


Source:www.telecommunications-tutorials.com/tutorial-PSTN.htm
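The loop-length and serving-area figures above are easy to sanity-check. The short Python sketch below redoes the arithmetic with the article's own rounded numbers (an 18 kft loop limit, rounded to a 3-mile serving radius):

import math

FEET_PER_MILE = 5280
KM_PER_MILE = 1.609344

loop_kft = 18  # the 1300-ohm limit on 26-gauge cable
radius_miles = loop_kft * 1000 / FEET_PER_MILE
print(f"18 kft = {radius_miles:.1f} miles "
      f"= {radius_miles * KM_PER_MILE:.1f} km")

# Serving area of a CO with the article's rounded 3-mile radius:
area_sq_miles = math.pi * 3.0 ** 2
print(f"serving area ~ {area_sq_miles:.0f} sq mi "
      f"= {area_sq_miles * KM_PER_MILE ** 2:.0f} km^2")
# ~28 sq mi / ~73 km^2, matching the article's rounded figures.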

Location of Radio Sources

Many services in future mobile radio systems, such as E-911, location-sensitive billing, cellular system design, resource management, fleet management, and intelligent transportation systems, need information on the location of the mobile terminals. Consequently, an important requirement for the physical layer of future mobile radio systems is the ability to deliver precise location information. State-of-the-art solutions based on the measurement of channel attenuation or access delay cannot fulfil the stringent performance requirements of future applications, especially in typical mobile radio scenarios with multipath propagation. Even satellite navigation systems like GPS do not fulfil the requirements, as they do not work inside buildings. The Research Group for Radio Communications investigates novel database-based location techniques. Furthermore, hybrid location techniques, which combine several of the mentioned approaches, are taken into consideration.


Source:www.eit.uni-kl.de/baier/Research/research.htm

Multi-Static and Synthetic Aperture Radar System

In co-operation with research institutes such as DLR and FGAN, problems of multi-static radar systems for locating objects moving on the ground, and of high-resolution SAR systems, are considered.


Source:www.eit.uni-kl.de/baier/Research/research.htm

System Modeling and Simulation by ML-Designer

For the design and optimization of radio communication systems, modeling and simulation tools are required. As such a tool, the Research Group for Radio Communications adapted ML-Designer. For the system concept JOINT, a complete simulation chain was implemented in ML-Designer and utilized for extensive system simulations. Important functional modules that had to be established in this work concern JD, JT, and JCE. The generated JOINT simulation chain can serve as a reference model for demonstrating the features and performance of ML-Designer.


Source:www.eit.uni-kl.de/baier/Research/research.htm

Friday, December 11, 2009

Interference and Co-Existence in Mobile Radio Scenarios

In mobile radio communications, several cellular networks of different operators often have to co-exist in the same geographic area. Although the networks of the operators nominally use different frequency bands, interference can occur due to intermodulation, blocking, and oscillator noise. The impact of such interference depends heavily on the system parameters and on the scenario in which the networks are deployed. The interference is most severe when the base stations of two different operators are not co-located: a mobile station MS_B far from its serving base station BS_B, and therefore using very high transmit power, can be very close to a base station BS_A of the other operator A, thus creating very high interference. Similar co-existence situations can occur between different systems which are adjacent in frequency, such as the UMTS TDD and FDD modes in Europe. The main objectives of the research project are to categorize the different interference types, choose suitable measures to evaluate the interactions between the networks, identify critical constellations and parameters, and develop methods to combat interference between different networks in critical cases.



Source:www.eit.uni-kl.de/baier/Research/research.htm
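To put rough numbers on the near-far situation just described, here is a toy log-distance path-loss sketch in Python (PL(d) = PL(d0) + 10 n log10(d/d0)). All values, transmit power, path-loss exponent, and distances, are invented for illustration; the point is the huge level difference at the victim receiver.

import math

def path_loss_db(d_m, pl0_db=40.0, n=3.5, d0_m=1.0):
    """Log-distance path loss: PL(d) = PL(d0) + 10*n*log10(d/d0)."""
    return pl0_db + 10 * n * math.log10(d_m / d0_m)

tx_power_dbm = 30.0  # MS_B transmitting near maximum power

wanted_dbm = tx_power_dbm - path_loss_db(2000.0)  # at serving BS_B, 2 km away
leak_dbm = tx_power_dbm - path_loss_db(20.0)      # into nearby BS_A, 20 m away

print(f"received at BS_B: {wanted_dbm:.1f} dBm")
print(f"received at BS_A: {leak_dbm:.1f} dBm "
      f"({leak_dbm - wanted_dbm:.0f} dB stronger)")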

Channel Estimation

Although 3rd generation mobile radio systems have not yet become widely operational, research concerning beyond 3G mobile radio systems has already started around the globe. At the Research Group for Radio Communications the system architecture with the acronym JOINT (Joint Detection and Transmission Integrated Network) is developed. JOINT is a contribution to the project “Joint Research Beyond 3G” (JRB3G) sponsored by the SIEMENS Company, Germany, in which German and Chinese universities are involved. JOINT is a service area (SA) based concept. Each service area consists of access points (APs) receiving/transmitting radio signals from/to the mobile terminals (MTs) active in the SA. Each SA is equipped with a central unit (CU), which is responsible for signal processing. Joint Detection (JD) is applied for uplink data detection and Joint Transmission (JT) is used for downlink data transmission. Both techniques rely on the knowledge of the mobile radio channel, which is gained by the technique termed Joint Channel Estimation (JCE). JCE is a pilot-aided channel estimation technique. The active MTs transmit known symbols - termed pilots - to the APs, and based on these symbols and on the corresponding received signals, estimates of all radio channels between each MT and AP are obtained at the CU. Exploiting the potential and evaluating the performance of JCE in JOINT is the task of this work.


Source:www.eit.uni-kl.de/baier/Research/research.htm
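The principle behind pilot-aided estimation such as JCE can be shown in a few lines of numpy: known pilot symbols are transmitted, and the channel coefficients are recovered by least squares from the received signal. The flat-fading model and the sizes below are simplifications chosen only for the sketch, not the actual JOINT signal model.

import numpy as np

rng = np.random.default_rng(0)
n_pilots, n_coeffs = 16, 4  # pilot symbols sent, channel coefficients sought

P = rng.choice([-1.0, 1.0], size=(n_pilots, n_coeffs))  # known pilot matrix
h = (rng.normal(size=n_coeffs)
     + 1j * rng.normal(size=n_coeffs)) / np.sqrt(2)     # true channel

noise = 0.05 * (rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots))
y = P @ h + noise                                       # received observations

h_hat, *_ = np.linalg.lstsq(P, y, rcond=None)           # least-squares estimate
print("estimation error:", np.linalg.norm(h - h_hat))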

MIMO Systems

In MIMO (Multiple-Input Multiple-Output) systems, multiple antenna elements are placed at both the access point (AP) and the mobile terminals (MTs). A spatial dimension is therefore introduced at both the AP and the MTs, which offers large potential to increase the system capacity. Basically, MIMO systems can be classified into single-user MIMO systems, where the AP communicates with only a single MT, and multi-user MIMO systems, where the AP communicates simultaneously with several MTs.

The focus of this project is to

study the basic properties of single-user MIMO systems, e.g. the distribution of eigenvalues of the MIMO channel,
study the basic properties of multi-user MIMO systems, e.g. the cross-correlation between eigenmodes of MIMO channels between AP and MTs, and
develop novel base band signal processing techniques for multi-user MIMO systems, especially for downlink communication.


Source:www.eit.uni-kl.de/baier/Research/research.htm
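A short numpy sketch of the single-user case mentioned first in the list above: the eigenvalues of H H^H describe the channel's parallel spatial subchannels, and with equal power allocation the capacity is C = sum log2(1 + (SNR/Nt) lambda_i). The antenna counts, the i.i.d. Rayleigh model, and the SNR are arbitrary choices for the sketch.

import numpy as np

rng = np.random.default_rng(1)
nt, nr, snr = 4, 4, 10.0  # transmit antennas, receive antennas, linear SNR

# i.i.d. Rayleigh channel matrix, one entry per rx/tx antenna pair
H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)

eigvals = np.linalg.eigvalsh(H @ H.conj().T)  # eigenvalues of H H^H
capacity = np.sum(np.log2(1 + (snr / nt) * eigvals))

print("eigenvalues of H H^H:", np.round(eigvals, 2))
print(f"capacity with equal power allocation: {capacity:.2f} bit/s/Hz")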

Beyond 3G Mobile Radio Systems (JOINT)

At the Research Group for Radio Communications a system concept termed JOINT (Joint Transmission and Detection Integrated Network) for beyond 3G mobile radio systems is being developed. JOINT applies adaptive antenna techniques as well as the joint transmission (JT) and joint detection (JD) techniques. The main goals are to increase capacity compared to 3G cellular systems and WLAN systems as for instance IEEE 802.11, and to keep the complexity of the mobile terminals at a low level. In the following, important characteristics of JOINT are listed:

The entire area intended for mobile radio services is subdivided into service areas (SAs). The SAs can be adjacent to each other or isolated. One SA can represent a building, a floor of a building, or even part of an inner city. Thus, a SA corresponds to a number of cells of a common pico- or micro-cellular system. Since the JD and JT techniques mentioned above can eliminate the interference within a SA, introducing SAs instead of cells yields the advantage of cancelling intercell interference.
Within each SA numerous mobile terminals (MT) can be active.
On the fixed network side multiple access points (APs) per SA, which are connected to a central unit (CU), will be used. The central unit is connected to the core network.
Each AP and each MT applies an antenna consisting of a single antenna element or an array of antenna elements.
The communication of a MT with the CU occurs over several APs simultaneously.
The MTs are kept simple, and the necessary processing efforts are concentrated in the CU.
Multicarrier transmission using orthogonal carriers will be applied. The option of combining the multicarrier transmission with the multiple access schemes TDMA, FDMA and CDMA is given.
TDD is the favored duplexing scheme.
In the uplink JD is used. This means that the received signals at the antennas of the APs are jointly processed in the CU. The goal is to eliminate multiple access interference (MAI) originating inside the SAs.
In the downlink JT is used. The antennas of the APs transmit signals which are structured in such a way that each MT of a SA obtains its corresponding signal practically free from any MAI by applying a simple receiver structure.
For channel estimation the scheme joint channel estimation (JCE) is proposed.


Source:www.eit.uni-kl.de/baier/Research/research.htm

Transmit Array Processing for CDMA Downlinks

It is expected that in the near future the demand for high data rates will dramatically increase, especially in the downlink of mobile radio systems. This expectation is currently reflected by the lively high speed downlink packet access (HSDPA) standardization efforts within 3GPP. CDMA mobile radio standards, e.g., the UTRA-FDD mode, are prepared for such requirements by means of orthogonal variable spreading factor (OVSF) codes allowing the coexistence of high and low data rate users. To achieve high data rates, low spreading factors have to be utilized, which goes along with a massive impact of interference and noise degrading the system performance. On the part of the transmitting base stations, high downlink data rates may be enabled by utilizing adaptive multi-element transmit antennas. For the efficient operation of such antennas, information about the downlink radio channels is required at the base stations. The 3G partial standard WCDMA, designed for the Frequency Division Duplexing (FDD) radio frequency bands, suffers from the inherent disadvantage that, due to the frequency gap between uplink and downlink, the results of the uplink channel estimation cannot be directly used as the channel information required for adjusting the multi-element transmit antennas. In the FDD case, two basic approaches to obtaining knowledge about the spatial properties of the downlink channels exist: either exploiting the spatial properties of the uplink channels, or feeding back downlink channel information from the mobile stations to the supplying base stations. The first approach is based on the assumption that the directional properties and the attenuations of the respective radio propagation paths of the uplink channels and the corresponding downlink channels are equal. If this assumption holds, methods exist for an at least sub-optimum adjustment of the downlink transmit antenna weights based on uplink channel estimates, which do not rely on complex DOA estimation techniques. A comparison of the performance achievable by techniques based on uplink channel estimation with that achievable if actual downlink channel information is available shows the superiority of techniques which directly exploit information about the spatial downlink channels. The only way to obtain information about the spatial downlink channels at the base station is via feedback from the mobile stations, which leads to the development of efficient feedback schemes.

If high and low data rate links have to coexist among the mobile stations served by the same base station, mobile-station-specific minimum signal to interference and noise ratio (SINR) constraints have to be fulfilled. Thus, transmit array processing and power control techniques should explicitly operate to support these demands. It is not possible to optimize transmit array processing and the adjustment of transmit powers in separate steps. Iterative methods may be exploited to fulfill the mobile-station-specific SINR constraints and at the same time keep the total transmit power as low as possible in order to reduce intercell interference.


Source:www.eit.uni-kl.de/baier/Research/research.htm
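One classic iterative method of the kind alluded to in the last sentence is distributed power control in the style of Foschini and Miljanic: each link scales its transmit power by the ratio of its SINR target to its currently achieved SINR. The gains, noise level, and targets below are made-up numbers chosen so that the targets are feasible.

import numpy as np

G = np.array([[1.00, 0.10, 0.05],   # G[i, j]: gain from transmitter j
              [0.08, 1.00, 0.10],   # to receiver i
              [0.06, 0.12, 1.00]])
noise = 1e-3
targets = np.array([4.0, 2.0, 3.0])  # per-user SINR targets (high/low rate)

p = np.full(3, 0.1)                   # initial transmit powers
for _ in range(100):
    signal = np.diag(G) * p
    interference = G @ p - signal + noise
    sinr = signal / interference
    p = (targets / sinr) * p          # per-link power update

print("powers:", np.round(p, 4))
print("achieved SINRs:", np.round(sinr, 2))  # converges to the targets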

Joint Processing of Received Signals in CDMA Mobile Radio Systems with Infinite and Quasi-Infinite Data Transmission

Mobile radio systems are interference limited. Therefore, in future mobile radio systems a significant performance enhancement can only be achieved by utilizing techniques which reduce the degrading impact of interference. A highly attractive class of such techniques are techniques for joint received signal processing. To date, the systematic design and analysis of such techniques for CDMA mobile radio systems with infinite or quasi-infinite data transmission is not yet well understood, even though such systems promise to be a particularly interesting class of future mobile radio systems; see, for instance, the third generation mobile radio systems currently being put into operation. This project contributes to the systematization of the design and optimization of techniques for joint received signal processing in such mobile radio systems. It is shown that the task of joint received signal processing can logically be split into five subtasks: block establishment, data assignment, interblock signal processing, intrablock signal processing, and combining plus decision. Besides the exact definition of these five subtasks, the main focus of the project is the development of attractive proposals for suboptimal and, according to certain criteria, optimal solutions to these subtasks.


Source:www.eit.uni-kl.de/baier/Research/research.htm

Interference Reduction in CDMA Mobile Radio Systems

One of the most important performance-limiting factors in CDMA mobile radio systems is multiple access interference. Consequently, some of the proposals for third generation mobile radio systems, as for instance the Chinese TD-SCDMA, already incorporate simple joint detection techniques like linear zero forcing estimation. However, these linear joint detection techniques are rather complex and can only be applied in systems with low numbers of simultaneously active users, i.e., CDMA systems with an additional TDMA component. Furthermore, the performance of linear joint detection, especially in scenarios with high system loads (i.e., numbers of users close to the spreading factor), is unsatisfactory. At the Research Group for Radio Communications, advanced joint detection techniques which combine low complexity and superior performance are developed. These advanced joint detection techniques rely on the turbo principle. The FEC code typically used in mobile radio systems and the CDMA spreading are considered as a serially concatenated code. A low complexity, high performance decoder for such serially concatenated codes consists of two decoders alternately decoding the two codes and exchanging extrinsic information between them. A generalized detector architecture, which includes turbo detectors as well as conventional joint detectors like the zero forcing estimator or parallel interference cancellation, is designed. By using sliding window techniques, the above-mentioned advanced joint detection techniques can also be applied in CDMA systems without a TDMA component, e.g., W-CDMA.


Source:www.eit.uni-kl.de/baier/Research/research.htm
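For reference, linear zero-forcing joint detection of the kind mentioned above fits in a few lines of numpy: stacking the users' spreading signatures into a system matrix A, the joint estimate is d = (A^T A)^(-1) A^T r. The spreading factor, user count, and the slightly correlated toy codes below are illustrative assumptions only.

import numpy as np

rng = np.random.default_rng(2)

# Toy signature matrix: one spreading code per user (columns),
# spreading factor 8; the codes are deliberately not all orthogonal.
A = np.array([[+1, +1, +1],
              [+1, -1, +1],
              [+1, +1, -1],
              [-1, +1, +1],
              [+1, -1, -1],
              [-1, +1, -1],
              [-1, -1, +1],
              [+1, +1, +1]], dtype=float) / np.sqrt(8)

d = np.array([1.0, -1.0, 1.0])             # BPSK data, one symbol per user
r = A @ d + 0.1 * rng.normal(size=8)       # received chip-rate vector

# zero-forcing joint detection: d_hat = (A^T A)^(-1) A^T r
d_hat = np.linalg.solve(A.T @ A, A.T @ r)
print("sent:    ", d)
print("detected:", np.sign(d_hat))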

Air Interface Design by Transmitter Orientation and Receiver Orientation

Conventional transmission schemes are transmitter oriented, which means that the transmitter algorithms are a priori given, whereas the algorithms to be used at the receivers have to be a posteriori adapted under consideration of channel state information. In contrast to transmitter orientation, in receiver oriented systems the receiver algorithms are a priori given, and the transmitter algorithms, again under consideration of channel state information, have to be a posteriori adapted correspondingly. Recently, receiver oriented schemes have been proposed as promising approaches for mobile radio downlinks which utilize the duplexing scheme TDD. In such applications the rationale receiver orientation may offer the following advantages:

The a priori determined receiver algorithms can be chosen with a view to arriving at particularly simple receiver structures. In this way, compared to transmitter oriented systems, complexity can be transferred from the MTs to the AP.

Channel information is only required at the AP. Therefore, no downlink transmission resources have to be sacrificed for training signals, and no channel estimators are required at the MTs.

The focus of the project is to illuminate the basic commonalities and differences between the two rationales, transmitter orientation and receiver orientation. Based on the results gained, proposals for future air interfaces are to be developed.


Source:www.eit.uni-kl.de/baier/Research/research.htm
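A minimal numpy sketch of the receiver-oriented idea, with channel knowledge at the AP only: transmit zero-forcing precoding, s = H^H (H H^H)^(-1) d, pre-distorts the transmitted signal so that each MT receives its own symbol with a trivially simple receiver. The flat channel model and antenna counts are illustrative assumptions, not the group's actual algorithms.

import numpy as np

rng = np.random.default_rng(3)
n_ap_ant, n_mts = 6, 3  # AP antenna elements, single-antenna MTs

H = (rng.normal(size=(n_mts, n_ap_ant))
     + 1j * rng.normal(size=(n_mts, n_ap_ant))) / np.sqrt(2)
d = np.array([1.0, -1.0, 1.0])  # one BPSK symbol per MT

# transmit zero-forcing precoding: s = H^H (H H^H)^(-1) d
s = H.conj().T @ np.linalg.solve(H @ H.conj().T, d)

received = H @ s  # what the MTs observe (noise omitted)
print("received at the MTs:", np.round(received.real, 3))  # equals d, no MAI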

Hypertext Semantics

How does the medium of hypertext, including both its forms and the practices, artifacts, and technologies which comprise it, dispose towards principles of organization and sequencing of information and meaning, in the medium itself as object-text, and in the traversal practices of the user, as meaning-text? What are the currently typical organizations of meaning and their interpretations by users? What are alternative strategies for meaning-development, both in authoring and in using hypertexts? How do existing and potential meaning organizations differ from those of more linear text media? How do the principles of intertextuality and co-text contextualization translate in the hypertext medium? What are the analogues of multivariate structural organization and co-variate textual-cohesive organization? How are text units constructed so as to facilitate construal of semantic ties between linked units?


Source:academic.brooklyn.cuny.edu/education/jlemke/phd-tops.htm

Ecosocial Dynamics of Self-organizing System-Networks

How do the dynamics of ecosystems including human semiotic artifacts and practices differ systematically in their dynamic potential from simpler ecosystems lacking (or with only much simpler) semiotic bases for couplings among system-constitutive processes? How does the role of semiotic artifacts and semiotic-material practices in such systems break the separability of scales typical of dynamical systems without semiotic mediation? What are the general consequences of interpenetration and overlapping (i.e. dynamical interdependence) of processes at very different temporal and spatial scales for the analysis of human meaning systems, specific events, and the time-development and evolution of ecosocial systems and system-types?




Source:academic.brooklyn.cuny.edu/education/jlemke/phd-tops.htm

Semiotics of Action

How are the actions that are conventional and meaningful in various typical activity contexts organized as a semiotic resource system? How do they make typological meaning by paradigmatic contrast? by syntagmatic catenation? How do they make topological meaning by their pacing, intensity, and other dimensions of gradable degree of performance? What principles are shared by the linguistic, depictional, gestural, and actional semiotic resources systems and their typical deployment practices and products?


Source:academic.brooklyn.cuny.edu/education/jlemke/phd-tops.htm

Presentational, Orientational, and Organizational Meaning

How do ideational, world-representational (Presentational); social-interactional, attitudinal (Orientational); and cohesive, structural (Organizational) meaning dimensions of signs-in-use integrate with and mutually contextualize one another? How does this occur (ideally) within a single semiotic resource system (e.g. language, gesture, depiction) and (actually) among multiple co-deployed semiotic systems? What are the genre conventions in different situated-use activities for such relations?


Source:academic.brooklyn.cuny.edu/education/jlemke/phd-tops.htm

Mathematics as a Semiotic Resource System

How is the relationship of mathematics to natural language most usefully characterized? How, historically, did mathematical registers of natural languages and mathematical symbolisms evolve and diverge from spoken and written verbal texts? What were the original functional specializations of mathematical registers and symbolic expressions? How were they integrated in use with verbal reasoning and exposition and with visual-graphical representations? What is the range of typical relations among these resources today, and what additional possibilities remain to be tried? What kinds of meanings are made better with mathematical registers and symbolic systems than with standard verbal language or graphical representations alone? What kinds of meanings and meaning dynamics are characteristic of the integrated combination of mathematics, verbal language, and graphical representations?


Source:academic.brooklyn.cuny.edu/education/jlemke/phd-tops.htm

Typological and Topological Meaning

How are various systems of semiotic resources (e.g. language, depiction, gesture) specialized for the construction of category-contrastive, or typological, meanings vs. continuous-variation, or topological, meanings? How does each system provide resources for each broad type of meaning? How are the typological and topological resources integrated in the (idealized) use of a single system, and in the (actual) use of multiple, integrated semiotic resource systems? How, historically, have meanings of each of these broad types influenced meanings of the other kind, and how have semiotic resource systems more specialized toward one been typically integrated with those more specialized toward the other? How are topological meanings typically coded by signs and signifying actions? What alternatives are available to the contrastive valeur-principle for typological semiosis?


Source:academic.brooklyn.cuny.edu/education/jlemke/phd-tops.htm

Multimedia Semiotics

How do verbal, visual, auditory, and tactile-interactional semiotic resources combine and integrate in multimedia productions, systems, and events? How have these resource systems co-evolved to integrate with one another? What are the historical traditions in various cultures by which they are conventionally linked or integrated? What kinds of differences in meaning-making typically occur for resources of each type in the context of co-deployment of resources of the other modalities? What possible cross-multiplications of meaning-potential among these systems have not yet been realized by existing or historical systems in various cultures? How can those which have been realized be usefully hybridized for various contemporary projects and agendas?


Source:academic.brooklyn.cuny.edu/education/jlemke/phd-tops.htm

Computer-mediated Communication, Learning, and Education

Which combinations of the various modes of CMC are optimally supportive of learning for various kinds of people and various kinds of skills and knowledges (i.e. culturally recognized meaning-making practices and meaningful actions)? What are the relative learning and educational affordances of synchronous vs. asynchronous communication? of presentational vs. interactional modes? of densely interlinked hypertext vs. more linearly connected text? of nonverbal media in various forms of integration with verbal text? of static vs. animated images? of abstract vs. photorealistic images? of 2-dimensional vs. 3-dimensional forms? of fixed vs. mobile user apparent-viewpoint? of tactile force-feedback and full-presence virtual environments vs. passive-interactive and framed-view systems/experiences? What are the optimum combinations for various types of learners and mentors of face-to-face interaction, computer-mediated communication, and human-artifact-environment interaction without computational mediation? of passive-readable text, interactive text, peer communication, mentor communication, dyadic, and group social interaction?





Source:academic.brooklyn.cuny.edu/education/jlemke/phd-tops.htm

Human-Computer Interaction: Interfaces and Ideologies

Functional interfaces between users and computational systems present specific affordances for human-meaningful, i.e. semiotic interactions between user and system. Various semiotic resources, from verbal signs to visual patterns to meaning-laden actions by users and responses by the system (including apparent initiations by the system), are designed to mediate between human cultural systems and meaningful behavior patterns on the one hand and the underlying computational programs and hardware on the other. Such interfaces are a genre of multimedia, polysemiotic texts, not unlike films or videogames. As such, they embody the ideological dispositions of their creators, who, historically, have been far less culturally diverse than the human population as a whole. How do widely used interfaces today, their metaphors and cultural assumptions, reflect specifically masculinist, middle-class, eurocultural attitudes and assumptions? How do, or would, interfaces designed from other cultural, gender/sexuality, and class positions differ, and with what effects on the potential of computational systems to aid the full range of human projects and agendas?



Source:academic.brooklyn.cuny.edu/education/jlemke/phd-tops.htm

Computer-mediated Communication

What are the most significant differences in how humans interact and make meaning with printed texts, interactive computer media, and other humans in face-to-face encounters? Focus on the linguistic-semantic co-variates of differences in interactional and operational modes. Relevant for CMC and learning, CMC and education, and basic linguistic variation issues associated with mode of interaction. Note the role of visual cues, multimedia perception, physical interaction and interaction potential, bodily vulnerability, rate of information exchange. What are the most relevant factors for linguistic variation among texts produced in reading, writing, FTF dialogue, telephone conversation, chat room discussion, email, listgroup email, etc.? For what communicative and learning functions are the various interactional-communicative modes most and least effective? Consider informational, affective, and bodily dimensions of this issue.


Source:academic.brooklyn.cuny.edu/education/jlemke/phd-tops.htm

SOURCE CODING

Investigation of the design and theoretical performance of data compression algorithms. Study of universal lossy coding (where the source is unknown a priori), noisy channel quantization, noisy source quantization, empirical design techniques, and convergence rates of algorithms. Very low complexity systems for power-constrained real-time applications such as speech and image coding.


Source:comm.csl.uiuc.edu/research.html
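As a minimal concrete instance of lossy coding, the numpy sketch below runs a uniform scalar quantizer at a few step sizes over a Gaussian source: coarser steps need fewer bits per sample but leave more distortion, which is the rate-distortion trade-off this research area studies. The source model and step sizes are arbitrary.

import numpy as np

rng = np.random.default_rng(4)
source = rng.normal(size=10_000)  # memoryless Gaussian source

for step in (1.0, 0.5, 0.25):
    quantized = step * np.round(source / step)  # uniform scalar quantizer
    mse = np.mean((source - quantized) ** 2)
    bits = np.log2(np.unique(quantized).size)   # crude rate estimate
    print(f"step {step:4}: ~{bits:.1f} bits/sample, distortion (MSE) {mse:.4f}")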

MULTIDIMENSIONAL INFORMATION AND COMMUNICATION THEORY

Design of multidimensional communication waveforms for two- and three-dimensional optical or magnetic recording channels. Capacity of multidimensional channels, two-dimensional data modulation codes, and data transmission codes.


Source:comm.csl.uiuc.edu/research.html

THEORY AND APPLICATIONS OF ERROR-CONTROL CODES

Design of efficient decoding algorithms for error-control codes; hardware and software implementations of error-control coding systems. Applications of error-control codes in digital communication and storage systems. Design of Euclidean-space codes and decoders for these codes, based on the theory of lattices and sphere packings. Development of codes and algorithms based on algebraic geometry and commutative algebra. Block and tree codes for soft-decision channels.


Source:comm.csl.uiuc.edu/research.html
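A tiny worked instance of error-control coding is the (7,4) Hamming code, which corrects any single bit error. G and H below are the standard systematic generator and parity-check matrices; the flipped bit position is arbitrary.

import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],   # systematic generator matrix
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],   # parity-check matrix
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

data = np.array([1, 0, 1, 1])
codeword = data @ G % 2

received = codeword.copy()
received[2] ^= 1                       # the channel flips one bit

syndrome = H @ received % 2            # non-zero syndrome flags an error
# each column of H is a distinct syndrome, so it points at the error
error_pos = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
received[error_pos] ^= 1               # correct the flipped bit

print("decoded data:", received[:4])   # the first 4 bits are the message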

COMMUNICATION NETWORKS

Investigation of routing, congestion, and transmission control mechanisms for networks ranging from wireless networks (such as multihop packet radio networks, wireless local area networks, personal communications networks, and cellular telephony networks) to high-speed guided media networks (such as fiber-optic ATM networks and multiprocessor computer networks). Performance of wireless networks under changes in traffic, interference, and connectivity. Interaction between channel measurements and network protocols; support for multimedia traffic. Methods to provide service guarantees for multiple classes of traffic in guided media networks. Interfaces between wireless and guided media networks. Tools used include stochastic modeling and analysis, optimization, computer simulations, estimation theory, combinatorics, and information theory.


Source:comm.csl.uiuc.edu/research.html

WIRELESS COMMUNICATION SYSTEMS

Design and analysis of robust communication systems that efficiently utilize radio bandwidth. Parallel and sequential methods for acquisition of synchronization, and the use of multiuser detection methods for interference suppression in direct-sequence code division multiple access (CDMA) systems. Adaptive interference suppression methods for acquisition and demodulation in direct-sequence CDMA systems. Adaptive coding, iterated decoding, and retransmission techniques in direct-sequence and frequency-hop CDMA systems. Design and performance analysis of adaptive mechanisms over channels with fading, multipath, and time-varying interference. Adaptive antenna arrays for interference suppression.


Source:comm.csl.uiuc.edu/research.html

Computer Architecture

Professor Johnson, Sohoni, Stine
The word architect is defined as one who plans or devises; one who designs something. A computer architect utilizes detailed knowledge of hardware and software to design computer systems. This includes the detailed design of components within the microprocessor as well as the various components that interact with the core processor. Architects create the blueprint of not only single CPUs, but entire multiprocessor systems with their various interconnecting hardware.

Students with a background in computer architecture can apply this broad knowledge in a number of different specialties in the industry. Companies like AMD, Intel, and IBM have a number of research and development departments where computer architects work with hardware engineers and computer scientists to design the next generation of processors. In addition to general-purpose and server platform designers, many companies such as Cisco Systems, NVIDIA, and Qualcomm employ computer architects to design their next-generation application-specific or embedded systems. Computer architecture is widely recognized as being at the heart of system design for any computer system.

OSU has an active research program in the area of computer architecture and the building of components that comprise system-on-chip (SOC) systems. Some of the areas of emphasis include prefetching, cache design, computer arithmetic systems, application-specific architectures, compilers and hardware for enhanced floating-point performance, and cryptographic hardware. OSU is also involved in designing state-of-the-art tools that allow complex architectures comprising billions of transistors to be created. This is an important research topic and is crucial to the progress of scientific research. In addition to working on the important research problems of today, we study the underlying hardware and technology trends to anticipate what the relevant research problems of the next decade will be.

Graduate level courses cover topics in computer architecture, digital VLSI design, computer arithmetic, application-specific architecture design, and system-on-chip design. OSU creates these architectures using design tools from Cadence Design Systems, Synopsys, and Mentor Graphics, along with simulators such as Simics and SimpleScalar, to name a few.


Source:www.ece.okstate.edu/old/Research/

Networks and Signal and Image Processing

Professor Fan, Teague, Yarlagadda
Speech and Audio Communications Laboratory
Visual Computing and Image Processing Lab

The prevalence of data processing technology has made it possible to mathematically manipulate data that formerly relied on the brain for processing, such as speech and images. While these have been topics of research and media attention for decades, much more research is now devoted to applications such as image and audio compression. Research at OSU is focused on areas of audio compression, image processing algorithms, hardware implementation of digital signal processing, and encryption and security. This area is one of the most active at OSU, with two IEEE Fellows engaged in ongoing research.
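To make the compression theme concrete, here is a toy transform-coding sketch in Python: transform the signal, keep only the strongest coefficients, discard the rest, and reconstruct. Real image and audio codecs elaborate on this idea; the code is purely illustrative and is not OSU's research code.

# Toy transform coding: the discard-weak-coefficients idea behind most
# perceptual image/audio codecs. Illustrative only.
import numpy as np

def toy_compress(signal, keep=0.05):
    """Keep only the strongest `keep` fraction of frequency coefficients."""
    coeffs = np.fft.rfft(signal)
    n_keep = max(1, int(len(coeffs) * keep))
    weak = np.argsort(np.abs(coeffs))[:-n_keep]   # indices of weak coefficients
    coeffs[weak] = 0                              # discard them
    return np.fft.irfft(coeffs, n=len(signal))

t = np.linspace(0, 1, 1000, endpoint=False)
x = np.sin(2*np.pi*50*t) + 0.3*np.sin(2*np.pi*120*t)   # two-tone test signal
print("max reconstruction error:", np.max(np.abs(x - toy_compress(x))))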

Graduate level courses in this broad area cover topics on digital image and signal processing, neural networks, processing of speech signals, and computer vision.


Source:www.ece.okstate.edu/old/Research/

Computational Electromagnetics

Professor Bunting, West
Robust Electromagnetic Field Testing and Simulation Lab

Research in computational electromagnetics is becoming ever more closely tied to wireless communications and high-speed computer design. At OSU, current research programs focus on the use of computational electromagnetics to determine radar scattering from rough surfaces.


Source:www.ece.okstate.edu/old/Research/

Telecommunications

Professor Scheets

Explosive growth in telecommunications, both Internet and cellular, has made the broad area of telecommunications research one of the hottest in the country. Research programs at OSU are closely affiliated with major industry players such as WorldCom, based in Tulsa. The opening of the OSU Tulsa campus, as well as the explosive growth of the telecommunications management program at OSU, promises to make telecom one of the major research areas for the foreseeable future.

Available graduate level courses include both network-level and device-level descriptions of fiber-optic systems; extensive laboratory courses in telecommunications systems are available through the MSTM program.


Source:www.ece.okstate.edu/old/Research/

Control Systems

Professor Fierro, Hagan, Yen
Intelligent Systems and Control Lab

With its traditional base of supporting statewide industry, it is not surprising that OSU has a strong interdisciplinary program in control systems engineering. Emphasizing neural networks and fuzzy logic, research programs are closely associated with a Master of Science in Control Systems Engineering degree program. Collaborations in the program are with Chemical Engineering, Industrial Engineering and Management, and Mechanical and Aerospace Engineering. Current research projects focus on predicting impending failures in complex interrelated structures, developing assessment tools based on emerging neural network and fuzzy logic technology. Additional work involves neural-network-based intelligent controllers capable of self-optimization, on-line adaptation, and autonomous fault detection and controller reconfiguration.


Source:www.ece.okstate.edu/old/Research/

Photonics and Electro-optics

Professor Cheville, Grischkowsky, Krasinski, W. Zhang, Y. Zhang

Ultra-fast Terahertz Research Group

The science of photonics, which uses light much as conventional electrical engineering uses electrons, is the basic technology behind the current communications revolution. As higher and higher data rates are required, photonics will play an ever larger role in the successful engineer's toolbox. Research programs at OSU specialize in ultra-fast optoelectronics and the growth and development of new optical materials, especially wide band-gap materials used for blue light generation.

The research group on terahertz (1 THz = 1000 GHz) technology is at the forefront of research in this frequency area. A new cleanroom facility dedicated to growth and analysis of materials such as GaN promises to make OSU a major player in this new semiconductor technology.
Graduate level courses include fiber-optic communication systems and ultra-fast optics. An NSF-funded PhD program in photonics is cross-disciplinary between physics, chemistry, and ECEN. This program is one of a very select few in the country that offer an emphasis on hands-on experimental research along with foundation courses over a wide range of disciplines.


Source:www.ece.okstate.edu/old/Research/

Energy and Power

Professor Allison, Gedra, Ramakumar

The field of energy and power has achieved national prominence due to federal deregulation of the industry. The resulting widely fluctuating energy prices and demands have led to major blackouts during times of peak usage, and unpredictable energy costs. Areas of this very broad field studied at Oklahoma State University include energy conversion, renewable energy sources and systems, reliability, electric power systems analysis, and power system economics and pricing. Before deregulation, many universities discontinued power engineering, leading to a current national need for graduates.

Graduate level courses include topics such as “green” technologies involving direct energy conversion, computer analysis of power system methods, and economics and regulation of the power industry. A unique undergraduate lab facility funded by the NSF offers teaching opportunities.

Source:www.ece.okstate.edu/old/Research/

Analog and Digital VLSI

Professor Hutchens, Johnson, Stine, Y. Zhang

Mixed Signal VLSI Design Group
Wireless Sensory Systems

The revolution in communications and information technology has been fueled by less publicized advances in other areas. One of these is the ability to reliably manufacture integrated circuits containing millions of transistors. OSU has active research in several types of Very Large Scale Integration (VLSI). These areas include mixed-mode CMOS VLSI spanning analog, MEMS, and digital electronics. As systems become smaller and smaller, more capabilities are being put directly onto a single wafer. Research areas include sensor/transducer systems and biomedical engineering. Other areas of active research are high-speed (GHz) and low-power CMOS analog-to-digital and digital-to-analog converters.

Graduate level courses cover topics in digital and analog VLSI design, and advanced solid state electronics. Two cleanrooms are affiliated with OSU, and state-of-the-art probe stations are used for characterization.


Source:www.ece.okstate.edu/old/Research/

Plasmonics for efficient nonlinear components at telecom wavelengths

Routing information through an all-optical network is an important issue today. Because of electro-optical conversions, switches are often the bottlenecks of the network. One way to solve this problem is to devise an all-optical switch.

Optical switches are based on optical bistability: depending on the prehistory of the component, a given input can lead to two different outputs. In practice this is achieved by placing a non-linear material inside a resonant cavity; the combination of feedback and non-linearity then gives rise to bistability.

Although the Kerr effect is a very promising candidate for the non-linear section, as it provides a near-instantaneous intensity-dependent change in refractive index, the effect is quite weak in most everyday materials. This means that one needs large incoming intensities in order to observe bistability, intensities far too large for telecom applications.
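For reference, the Kerr effect is conventionally written as an intensity-dependent refractive index (standard textbook notation, not taken from the project description):

\[ n(I) = n_0 + n_2 I \]

where \(n_0\) is the linear refractive index and \(n_2\) the Kerr coefficient. In ordinary materials \(n_2\) is tiny (on the order of \(10^{-20}\,\mathrm{m^2/W}\) for fused silica), so shifting a cavity resonance far enough to see bistability demands impractically high intensities.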

These problems can be circumvented, however, by using a metamaterial consisting of small (nanometer-scale) metallic particles embedded in a host dielectric medium. These metallic particles confine and enhance the incoming electric field in a very small region of space, so if we succeed in placing non-linear material in that particular region of space, we will have enhanced the Kerr effect significantly (enhancement factors of up to 10,000 have been reported).
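The enhancement can be made quantitative with a standard local-field argument (again a textbook sketch, not the project's own derivation). If the particles concentrate the applied field \(E_0\) by a factor \(f\), the effective third-order (Kerr) nonlinearity of the composite scales roughly as

\[ E_\mathrm{loc} = f E_0, \qquad \chi^{(3)}_\mathrm{eff} \sim f^2 \lvert f \rvert^2 \chi^{(3)}, \]

so a field-enhancement factor of order \(\lvert f \rvert \approx 10\) already accounts for the four-orders-of-magnitude Kerr enhancement quoted above.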

The main reason for this giant field enhancement is the plasmon resonance of the metallic particles. The name plasmon is used for two different types of electromagnetic phenomena. A surface plasmon is an electromagnetic wave trapped on the surface of a conductor because of its interaction with the conductor's free electrons (strictly speaking, they should be called surface plasmon polaritons to reflect this hybrid nature). A particle plasmon, on the other hand, is a dipole or multipole resonance of the free electrons of a metallic particle. Because of the bound and non-radiative nature of these plasmons, the electric and magnetic fields are strongly confined to the surface of the conductor.
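In the quasi-static limit the particle plasmon has a simple closed form. For a metallic sphere of radius \(a \ll \lambda\) with permittivity \(\varepsilon_m(\omega)\), embedded in a host of permittivity \(\varepsilon_d\), the dipole polarizability is (a standard result, included here for orientation):

\[ \alpha(\omega) = 4 \pi a^3 \, \frac{\varepsilon_m(\omega) - \varepsilon_d}{\varepsilon_m(\omega) + 2 \varepsilon_d}, \]

which, together with the local field, becomes resonantly large when \(\mathrm{Re}\,\varepsilon_m(\omega) \approx -2 \varepsilon_d\) (the Fröhlich condition). Choosing the metal, the particle shape, and the host dielectric is what tunes this resonance across the spectrum.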

The excitation of surface plasmons on infinitely extended metallic films has been known for over half a century, and their properties are well understood. Depending on the metal used, surface plasmons can be excited over a broad spectral range, from the ultraviolet to the infrared, including the near-infrared optical telecommunication frequencies. However, much work remains to be done in the telecom part of the spectrum, as most research thus far has focused on the visible and ultraviolet range.

The main goal of this project is to create a metamaterial in which the Kerr effect is sufficiently strong to observe bistability at intensities customary in telecom applications. Once this has been achieved, three possible applications will be investigated:
Intrinsic Optical Bistability: this is optical bistability without the presence of an external resonating cavity. The non-linear behaviour is caused by the non-linear equation connecting the applied field with the local field.
Photonic Crystal Structure: the nonlinear metamaterial can be incorporated in a photonic crystal in order to successfully couple light in and out of the structure.
Plasmonic Waveguides: eventually we will devise a much smaller structure than photonic crystals. Surface plasmons enable us to confine light to structures much smaller than the diffraction limit. A lot of different designs for plasmonic waveguides have been proposed; part of this project is to select the most promising candidate amongst them. Using these plasmonic waveguides, one can devise a structure similar to the one described above, with a non-linear zone, thus creating an all-optical switch.

Source:www.photonics.intec.ugent.be/research/topics.asp?ID=92

Thursday, December 10, 2009

Mobile multimedia revenues tipped to dethrone text

Multimedia services will surpass text messaging this year as the main source of mobile operators' non-voice revenue in the Asia-Pacific region, industry analyst IDC said Monday.

Driven by the rise of more technologically advanced handsets, multimedia services should reach 16.34 billion dollars, or 11 percent of total mobile revenues in the region outside of Japan, by the end of this year, it said.

Text messaging, or Short Message Service (SMS), which has been a major earner for years because of its simplicity, is likely to contribute around 10 percent, or 14.65 billion dollars, IDC said.

Ringtones and wallpaper downloads were the early drivers of multimedia mobile services, which now include games, videos, pictures, music clips and other applications.

IDC said text messaging last year accounted for 10.3 percent of total mobile services revenues, with multimedia services contributing 10.1 percent.

"IDC predicts that SMS contribution will plateau at 10 percent for the next few years, while mobile multimedia services will continue to ride on growth trajectory," the market researcher said.

IDC data showed that by 2013, multimedia services revenues in the region outside Japan will reach 45.25 billion dollars, more than double the 18.18 billion dollars in projected revenues from text messaging.

"Today, the emergence of handsets featuring larger screens and even touch-screen interfaces has pushed the uptake of mobile multimedia services to a new level," said IDC senior research manager Alex Chau.

"This has spurred content and application developers to develop tens of thousands of applications to satisfy this new demand amongst mobile users."

New mobile handsets now come with advanced operating systems and high-speed connectivity, allowing subscribers to purchase and share content with ease.


Source:www.physorg.com/news178183201.html

Magic box for mission impossible

On September 11, firefighters, police officers and ambulance workers faced a terrifying rescue effort in the World Trade Center complex. They battled to save people from the collapsing Twin Towers, searched for survivors, tackled fires and evacuated as many people as they could in an area which contained an estimated 17,000 people. And making their jobs even harder was the problem of poor communications: frightened workers and their relatives jammed mobile networks with calls and the emergency services' own radio communications turned out to be incompatible with one another.

Ever since, emergency workers and public authorities across the world have tried to learn lessons from that unprecedented scene and some telecoms specialists have sought to provide some of the technological answers. In Europe, Norwegian, Finnish and Spanish telecoms specialists and researchers started the CELTIC project DeHiGate to develop a technology that would ensure the ability to use phones and internet even in difficult terrain and difficult circumstances. "Our idea was to make a sophisticated box that you could connect to all kinds of communications centres like satellite and wireless, a box that emergency services could take with them instead of a big satellite dish," says Vidar Karlsen, research and development manager at the Norwegian branch of French electronics firm Thales.

Thales, which initiated the idea, quickly secured interested partners, including university researchers and the Spanish telecoms operator Telefonica. They realised that such technology would also have ready application in many standard emergencies such as accidents on motorways in areas where network coverage is poor. In particular, the researchers wanted to ensure that rescue workers could receive and send each other detailed maps of areas, pictures of a disaster and other graphics and images which might make the rescue quicker or safer. To do that, they needed to ensure emergency workers would have enough bandwidth. Telefonica developed an application to estimate the bandwidth available on a network in order to make a decision on whether to connect to another network.
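As a rough illustration of that last idea, the sketch below shows the kind of measure-then-decide logic a bandwidth-estimation application might use. Everything here, including the names, thresholds, and the probe itself, is hypothetical; it is not Telefonica's application.

# Hypothetical sketch: estimate achievable throughput, then decide whether
# to hand over to another network. Not Telefonica's actual application.
import time

def estimate_bandwidth(send_probe, probe_bytes=256_000):
    """Time the transfer of a known payload and return bits per second."""
    start = time.monotonic()
    send_probe(probe_bytes)              # blocking transfer of probe data
    elapsed = time.monotonic() - start
    return probe_bytes * 8 / elapsed

def should_switch_network(current_bps, candidate_bps, margin=1.5):
    """Hand over only if the candidate is clearly better, to avoid flapping."""
    return candidate_bps > margin * current_bps

# Example with a simulated probe that behaves like a ~2 Mbit/s link:
slow_link = lambda n: time.sleep(n * 8 / 2_000_000)
print(f"{estimate_bandwidth(slow_link):,.0f} bps")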

Karlsen says the box which the team began developing was an advanced router, which used existing hardware and equipment. The challenge was for the team to develop and test new software to make it work the way they wanted. Telefonica developed the best way to use large servers on the move, crucial work to make it easier to roll out networks in remote areas. Its workers explored the analysis of data in real time from geographical information systems.

"We had a fire and accidents, we had a scenario where we had to bring people out from a burning building," remembers Karlsen. "As a developer you always have an idea of the user's requirements, but when you see the actual requirements you realise you could never think of all that."


Firefighters took part in the simulations and gave their views on the technology. The developers set up emergency ad hoc radio stations to deploy communications and watched as firefighters made calls, used the internet and even passed video footage of the disaster back to their colleagues at the base. "With this you can get reports on digital maps and see where each and every firefighter was," says Karlsen.

DeHiGate hit hurdles, however. Initial partners struggled to secure funding or pulled out, and project progress became sluggish. Developers had to use a mid-term review to get their focus back. They set themselves a deadline to develop a prototype box in time for a two-day emergency simulation in Finland.

Firefighters pointed out aspects of the technology that they would prefer to work differently, noting, for instance, that ordinary video cameras might not be of good enough quality for areas affected by a lot of smoke and that thermal cameras might be needed. In general, however, they gave the new box their approval.

Since the project was completed last year, Thales has continued to develop the router with a view to commercial contracts. "The direct users of this product might be limited groups - emergency workers - but what this project achieved could affect a broad group of people, whole countries," says Heinz Brueggemann, the director of CELTIC.

The knowledge gained by the partners in DeHiGate about setting up ad hoc networks could also be easily transferred to other telecoms markets. Telefonica agrees. "The results and the ideas which came up in this project, both in terms of (network) architecture and applications, have been the foundation for the development of a large project about personalisation, advertising and the use of telephone directory services," said Erik Miguel Fernandez, Project Manager for Research in Information Systems at Telefonica.

A total of 411 rescue workers were among the 2,995 people who died in the September 11 attacks. During the struggle to save lives, much helpful information from 911 callers was not passed to rescuers on the ground because of poor communications. A police warning for emergency workers to evacuate the towers before they collapsed was also not well conveyed. If a system similar to DeHiGate had been in place, perhaps fewer emergency workers' lives would have been lost. CELTIC found a way to improve the safety of those who put their lives at risk to save others.


Source:www.physorg.com/news178369250.html

S.Korea halves ceiling on text messages to fight spam

South Korean authorities on Wednesday halved the daily limit on text messages sent out by mobile phones as part of a campaign against spam, officials said.

The number of text messages that a mobile user can send out a day was restricted to 500, down from 1,000, beginning Wednesday, the Korea Communications Commission said.

The commission said the previous ceiling had been abused by spammers and was ineffective in cutting down on junk messages.

Sending spam to people without their consent is banned and subject to a heavy fine in South Korea, but the practice dies hard.

South Korea had 47.7 million mobiles registered for use as of October, accounting for 98 percent of the total population.


Source:www.physorg.com/news178349448.html

EU assembly adopts Internet, phone user rights

The rules endorsed Tuesday are part of a broad telecommunications package that also aims to boost competition for Internet and phone services. As a last resort, telecom companies could be required to separate their infrastructure and services businesses, giving other companies a shot at providing rival services on the same networks.

A new EU-wide telecoms authority also would be set up to ensure fair competition.

The EU's 27 nations must now implement the law in their national legislation by June 2011.

For consumers, the most visible part of the law is the new rights they would get to switch cell phone or fixed-line operators within one working day and to challenge disconnections, even if they are illegally sharing copyright-protected movies or music.

A service provider would have to inform users before cutting off access because of a copyright violation, and those users would be able to appeal to a national court.

Internet users still won't have an automatic right to Internet access - as some EU lawmakers had originally intended. The European Parliament dropped that guarantee because of concerns it could hinder French and British efforts to cut off Internet access to persistent file sharers.

Source:www.physorg.com/news178374890.html

Roku adds more 'channels' of video and other digital content

The company is opening a new "channel store" from which users can add access to a variety of content, including digital photographs from Flickr and Facebook, Internet radio from Pandora, and Web videos and video podcasts from Revision3 and blip.tv. In all, customers will be able to choose from 10 new channels.

The new channels are the first fruits of Saratoga, Calif.-based Roku's move to open up its video player to outside developers. The company expects to add additional channels in the "near future," company officials said.

Roku owners will gain access to the new channels via a software update the company plans to send out to their devices over the next two weeks. Alternatively, owners can download the update manually.

The company is providing the new channels -- and the content available through them -- for free. However, Roku player owners will have to create a user account with the company in order to add them.

Introduced last year, Roku's digital video player was the first device to allow Netflix customers to watch on their TVs movies and television shows offered through Netflix's streaming video service. The device drew praise from technology reviewers for its simplicity and its $100 price, which was far less expensive than comparable set-top boxes.

Earlier this year, Roku made the device more attractive by adding access to Major League Baseball games and to movies and television shows rented and sold by Amazon.com.

Last month, the company introduced two new player models: an $80 version, the Roku SD, which doesn't support high-definition video, and a $130 version, the Roku HD XR, which supports the latest -- and fastest -- Wi-Fi networking technology.

The idea of bringing Internet-based and digitally distributed video to the living room has been around for years but has yet to see widespread consumer use. Among the other devices that offer such services are Apple's Apple TV, TiVo's digital video recorders, Sony's PlayStation 3 and Microsoft's Xbox 360. Many of those devices have added access to services similar to those now available on Roku.

What's still missing on nearly all of them is access to Hulu and other similar services that stream ad-supported television shows over the Web, noted Kurt Scherf, vice president and principal analyst at Parks Associates, a technology research firm.

Adding access to channels such as Facebook and blip.tv "is a nice addition, but it's not the jackpot content," Scherf said. As such, "it ain't a cable killer."



Source:www.physorg.com/news178462170.html

Sprint to stop selling certain push-to-talk phones

Scott Sloat, a spokesman for the nation's third-largest wireless provider, said Sprint will still support customers who have phones with the technology, known as QChat, but will no longer introduce new phones with the feature.

Sprint introduced QChat last year as a potential replacement for the push-to-talk service on its Nextel-branded iDEN network, which is a mainstay among dispatchers, contractors and other business users. Some analysts were pushing Sprint to jettison iDEN, which Sprint acquired with Nextel in 2005 and which has been losing thousands of subscribers every quarter.

Since then, Sprint says the iDEN network's technical performance has improved and the company has used it for subscribers on the Boost Mobile brand.

Sloat wouldn't say how many QChat customers Sprint has, but he said iDEN customers account for the bulk of the company's push-to-talk users.


Source:www.physorg.com/news178824354.html

Broadband stimulus moves at dial-up speeds

Unfortunately, waiting is about all Morgenthaler has done since he applied for a slice of the more than $7 billion in so-called "broadband stimulus" funding. That money was part of the $787 billion federal package adopted in February to give a jolt to the economy.

That's an awesome amount of money. But the big number has created unrealistic expectations about the size and the speed of its impact. Already there's a debate raging about whether the stimulus funding has worked.

Here's the reality: Most of that stimulus money hasn't been spent. According to figures from the Recovery Accountability and Transparency Board, a government agency that monitors stimulus spending, only $234.2 billion had been allocated as of Nov. 20. And when you consider that almost one-third of the stimulus money is tax credits, the amount of cash actually being injected into the economy is even lower.

As for the remaining money, well, spending such a large sum is a lot harder -- and slower -- than it looks.

To understand why, and just how hard it is for the government to stimulate the economy, there's no better illustration than the repeatedly delayed attempts to get broadband stimulus funding into the hands of people like Morgenthaler.

Surfnet is in Los Gatos up in the mountains. The company's mission is to provide high-speed Internet connections to residents of Santa Cruz County who otherwise don't have broadband access.

When Morgenthaler heard about the stimulus funding, he saw an opportunity to do something more ambitious.

But right from the start, the nature of the funding created a tricky balance between spending the money fast, and spending it right.

It didn't help matters that the program was being run jointly by two federal agencies: the National Telecommunications and Information Administration and the U.S. Department of Agriculture's Rural Utilities Services Program. They took five months to write the rules -- and then gave applicants only 40 days to apply for the first round of funding.

"That was a lot of stress on us," Morgenthaler said. "And I think the quality of a lot of the proposals really suffered."


Morgenthaler scrambled. He teamed up with the Central Coast Broadband Consortium and the city of Grover Beach to submit a request for about $3 million in grants and loans. The group hopes to use that money to extend broadband coverage to 74,000 households and 2,000 businesses from Santa Cruz down to San Luis Obispo.

But since Morgenthaler submitted his proposal in August, there has been a deadline extension for applications and then a delay in awarding the first round of grants from November to December. And even now, the agencies are retooling the rules for the next round, leading to more confusion.

I called the NTIA to discuss the progress of the program, and a spokesman sent me a copy of testimony given to a Senate oversight committee last month by Lawrence E. Strickling, an assistant secretary at NTIA.

Strickling explained that the agencies were trying to balance the need for speed with the desire to get things right.

"NTIA is committed to ensuring that taxpayers' money is spent wisely and efficiently," he testified. "We have been working with the Department of Commerce's Inspector General to design this program in a manner that minimizes the risk of waste, fraud, and abuse."

Fair enough. And even Morgenthaler understands the need for taking time to make sure they make the right decisions.

But if people are going to believe the federal government can be competent and effective, then getting things done right and getting them done fast can't be a trade-off. They need to be standard procedure.



Source:www.physorg.com/news179052810.html

Comcast-NBC deal shows future is in content

It's understandable why the strategy might seem dubious: Another media company, Time Warner Inc., just gave up on that and spun off its cable TV division.

Yet while Comcast seems to be taking a different approach - marrying entertainment content with the largest cable TV system in the nation - it and Time Warner have arrived at the same conclusion: The future is in content, and the pipes that carry it matter less.

That's why Time Warner could jettison the business of selling subscription TV service and focus on the Warner Bros. movie studio, cable channels such as CNN and HBO and magazines such as People and Sports Illustrated.

Comcast's hard-wired delivery system serves a quarter of the nation's pay TV households and isn't about to be thrown overboard. But Comcast has decided it must be much more than a cable TV provider. That's why CEO Brian Roberts tried - unsuccessfully - to buy Walt Disney Co. for $54 billion in 2004.

Even after acquiring or developing cable TV channels including E! and Golf Channel and sources of programming such as the Philadelphia Flyers and 76ers, cable TV and other services running through Comcast's pipes make up 95 percent of its revenue. That would drop to 65 percent if the NBC Universal deal goes through, giving Comcast control of the Peacock network, cable channels such as USA, Bravo and Syfy and the Universal Pictures studio.

And over time cable TV figures to matter even less, as people watch more video on PCs and cell phones or through video game consoles connected to the Internet.

Content companies such as Time Warner and NBC Universal are already trying to serve consumers in multiple ways, including online, and are less tied to one format over another.


"Cable is under the most threat and is the most motivated to take this new relationship and make it work for consumers," said James McQuivey, a media and technology analyst for Forrester Research. "If NBC were to own Comcast, I would not be nearly as optimistic about this."

While the reasoning for its NBC Universal deal might be sound, it still would take years to show the bet was worthwhile. Comcast is spending $13.75 billion in cash and assets to get a controlling stake in NBC Universal from General Electric Co. It is creating a large - and potentially hard to manage - conglomerate that also includes theme parks.

"Bigger is just plain not better," said Stephen Farnsworth, an assistant professor of communication at George Mason University.

The deal announced Thursday is expected to close in nine months to a year if regulators and shareholders approve. GE would first buy Vivendi SA's 20 percent stake in NBC Universal for $5.8 billion. GE and Comcast would then form an NBC Universal joint venture, with Comcast owning a 51 percent stake.

Comcast, which is based in Philadelphia, would pay $6.5 billion in cash to GE and contribute $7.25 billion worth of cable channels it owns.

GE would retain a 49 percent stake, with the option of unloading half its stake in 3 1/2 years and all of it in seven years. The new NBC Universal would borrow $9.1 billion and pay that amount to GE.

Consumer groups already are worried that Comcast would wield too much power over entertainment. Congressional hearings are being planned. Although the government probably won't block a deal outright, regulators could prohibit Comcast from, for instance, denying rival subscription-TV services such as DirecTV access to NBC channels and other popular programming.

One reason for Comcast to pounce now is that NBC Universal is suffering through hard times. Advertising revenue has slowed, NBC is the fourth-ranked network, theme park attendance is weak and Universal Pictures has suffered box-office bombs.

But the larger motivation is that Comcast wants more programming - particularly from NBC Universal's cable channels - to deliver to its subscribers and to sell to other distributors.

It's likely hoping for parallels with a successful media deal: Disney's $19 billion purchase of Cap Cities/ABC in 1995.

That deal showed that the right content can be key: It gave ESPN to Disney, and the sports network has kept Disney solidly profitable even as ABC, its theme parks and movie studio faltered.

There are also lessons in AOL's purchase of Time Warner Inc. in 2001 for $147 billion in stock. It is considered one of the worst deals of all time and is being undone for good next week. At the time, Time Warner thought its movie, TV and magazine content would benefit from ties to two forms of distribution - AOL's Internet access business and Time Warner's cable services. Ultimately, among other problems, neither distribution channel significantly enhanced the value of the content.

This is not to say cable companies are necessarily successful at broadening the content they own. Cablevision Systems Corp. owns the AMC cable channel and Newsday newspaper, plus the New York Rangers and the New York Knicks and their arena, Madison Square Garden. And yet it's spinning off the Madison Square Garden assets to make it easier for investors to buy a more focused business.

Comcast's own development as a content owner in recent years hasn't proven itself yet. Comcast's shares now trade for about half of this decade's high of $29.65, adjusted for splits, hit in January 2007. On Thursday, however, Comcast shares rose by 6.5 percent to $15.91, as investors applauded a 40 percent dividend increase the company announced along with the NBC Universal plans.

Because the services that require its own cable pipes account for virtually all of Comcast's revenue, Comcast decided it had no choice but to accelerate its drive for content that could be delivered in many forms.

In 2006, Comcast acquired thePlatform, a delivery system for online video. It also launched Fancast, a site that lets viewers watch full episodes of TV shows and movies online with advertisements, much like Hulu, the joint venture between NBC Universal, News Corp.'s Fox and Disney.

Comcast says the combination of content from NBC channels and Universal Pictures with its cable and Internet distribution network gives it dozens of ideas about how to make money from new methods of delivery and promotion. Already it's about to launch a test in which paying subscribers can access cable channel shows online.

"With this transaction, our company is strategically complete," Comcast's Roberts said Thursday.

But given that content is king, Comcast might not be done yet.


Source:www.physorg.com/news179117964.html

FCC asks Verizon Wireless to explain fees

In November, the carrier hiked the maximum early contract termination fee for smart phones to $350 from $175. Like other carriers, Verizon subsidizes the cost of the devices for contract-signing customers, then expects to make that money back in service fees over the term of the contract.

"Smart phones quickly became a major part of our business and cost us a whole lot more," Verizon spokesman Jeffrey Nelson said.

The FCC's letter to Verizon asks how consumers will know whether the increased fee applies to their phone, and whether it's spelled out anywhere except in the formal customer agreement.

The FCC is also asking the carrier about $1.99 data access fees that have appeared on the bills of customers who don't have data plans but accidentally initiate data access by hitting a button on their phones. Verizon says that, as of a few months ago, it doesn't charge when a customer starts a data service and then quickly turns it off.

The Plain Dealer in Cleveland tapped into a vein of frustration among Verizon customers in columns on the issue in August.


Source:www.physorg.com/news179171423.html

A special kind of flight training

Whether it is a business trip to a neighbouring country or a holiday in the Caribbean: what most people take for granted actually poses a great challenge, not only for the transport business but particularly for pilots. The goal of the European Union project SUPRA, funded with 3.7 million euros, is to train pilots in the best manner possible and prepare them for hazardous scenarios.

Scientists from nine institutions and industrial enterprises aim to investigate motion perception in extreme situations as well as to improve flight simulators, thereby making an important contribution to enhanced aviation safety. Researchers from the Max Planck Institute for Biological Cybernetics in Tübingen, Germany, will contribute to the biological foundations of understanding how pilots become disoriented in extreme flight conditions, and how balance and visual information combine in the brain.

Student pilots receive ever increasing amounts of training in simulators, combined with flight training in real aircraft. This saves money, helps protect the environment and, above all, is a safer form of training. Standard flight manoeuvres, such as take-off and landing, can already be properly trained with current flight simulator technology. Extreme manoeuvres, such as recovery from loss of control, are much more complex and difficult to simulate. One of the problems the interdisciplinary research team seeks to resolve is the lack of an appropriate algorithm to optimize the motion within the limited workspace of any simulator for such extreme conditions. Within the framework of the three-year SUPRA project (Simulation of Upset Recovery in Aviation), their goal is to improve the simulation of such complex flight manoeuvres and to develop a new generation of flight simulators.
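One family of algorithms used for exactly this workspace problem in existing simulators is the classical "washout" filter: high-pass filter the aircraft's accelerations so that onset cues are reproduced while sustained motion is washed out and the platform drifts back to neutral. The Python sketch below is a minimal illustration of that idea, with invented parameter values; it is not the algorithm SUPRA is developing.

# Minimal sketch of a classical-washout-style motion cue filter.
# Illustrative only; parameter values are hypothetical.

def washout_highpass(accels, dt, tau):
    """First-order high-pass filter: sustained acceleration components
    (which would drive the platform out of its workspace) decay away,
    while transient onset cues pass through."""
    alpha = tau / (tau + dt)
    filtered, prev_in, prev_out = [], 0.0, 0.0
    for a in accels:
        out = alpha * (prev_out + a - prev_in)   # discrete high-pass step
        filtered.append(out)
        prev_in, prev_out = a, out
    return filtered

# A sustained 2 m/s^2 acceleration: strong onset cue, then washout,
# so the actuators return toward the centre of their limited workspace.
cue = washout_highpass([2.0] * 200, dt=0.01, tau=0.5)
print(cue[0], cue[-1])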

At first, relevant training scenarios must be chosen for the experiments. This will be done in close cooperation with professional test pilots, who have already acquired much experience with such extreme conditions. The scientists, under the direction of Heinrich H. Bülthoff at the Max Planck Institute for Biological Cybernetics, hope to discover how pilots perceive aircraft motion during extreme situations and why they can become spatially disoriented. They are particularly interested in the interaction of vision and the signals the brain receives from the balance organs in the inner ear. With the help of a robotic arm, test subjects will be exposed to a variety of accelerations while simultaneously viewing a computer-generated virtual environment. By appropriately stimulating both the visual and balance systems, it is possible to "trick" the brain in such a way that the pilot perceives an actual flight manoeuvre rather than the laboratory environment. For example, the scientists are able to give an impression of acceleration with purely visual stimulation, without providing any real motion. This perception can be enhanced by providing suitable actual motion. This type of motion illusion is used in flight simulators to produce a perception of motion that would not otherwise be possible given the limited workspace.
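A common quantitative model for this combination of vision and balance (a standard cue-integration account, not necessarily the model the SUPRA team will adopt) is reliability-weighted averaging:

\[ \hat{s} = w_v \hat{s}_v + w_b \hat{s}_b, \qquad w_v = \frac{1/\sigma_v^2}{1/\sigma_v^2 + 1/\sigma_b^2}, \quad w_b = 1 - w_v, \]

where \(\hat{s}_v\) and \(\hat{s}_b\) are the motion estimates from vision and from the balance organs and \(\sigma_v^2, \sigma_b^2\) are their variances. The less reliable cue is weighted down, which is precisely what a simulator exploits when it substitutes convincing visual motion for real motion.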


The international consortium makes use of two completely new types of simulators that exist in the Dutch research institute, TNO, and in the Max Planck Institute for Biological Cybernetics in Tübingen, Germany. "In these times of ever increasing mobility, thorough training of new pilots is an important theme. We are pleased that the European Union has provided us with the opportunity to work with an international team to make an important contribution to flight safety by improving pilot training", stated Heinrich H. Bülthoff at the start of the project.


Source:www.physorg.com/news179519168.html

Google search getting eyes and ears

Google search is getting eyes and ears, moving beyond typed key words to let people scour the Internet with mobile telephone cameras or spoken words in multiple languages.

Google on Monday unveiled "Goggles" software that lets people search online using pictures taken with cameras in mobile phones based on its Android operating system.

"When you take a mobile phone camera and connect it to the Internet, it becomes an eye," Google mobile search vice president of engineering Vic Gundotra said while demonstrating Goggles in Mountain View, California.

"Google Goggles lets you take a picture of an item and use the picture as the query."

An experimental version of Goggles will be available at the Google Labs website. Goggles already recognizes books, wine labels, CD covers, landmarks and more, according to Gundotra.

He demonstrated by taking a picture of a wine bottle label with a smart phone and almost instantly getting reviews, pictures and other Internet data about the vintage in a Google search results Web page.

"It is our goal to visually identify any image," Gundotra said.

"It is in Google Labs because of the nascent nature of computer vision. In the future, you will be able to point (a camera phone) and we will be able to treat it as a mouse pointer for the real world."

Google on Monday also added Japanese to a voice-based search service first rolled out about a year ago.

People can now speak Google search subjects into smart phones in English, Mandarin, or Japanese.

"In addition to voice search, Google has huge investments in translation," Gundotra said. "Our goal at Google is nothing less than being able to search in all major languages of the world."

The California Internet colossus is aiming to deliver a translation service to mobile telephones some time in 2010, according to Gundotra.

People will be able to speak into a mobile telephone to have sentences translated into other languages and delivered back quickly in text and audio forms, Gundotra said while demonstrating an early version of the service.

He also showed a "near me now" feature that uses global positioning capabilities in Android-based smart phones to customize map results to show shops, attractions, restaurants or other offerings that are in easy reach.

"In the future, there will be many different ways of searching," said Marissa Mayer, Google's vice president of search products and user experience.

"We really foresee a world where you can search and find your answer where ever it exists and whatever language it is in."



Source:www.physorg.com/news179482371.html

Life after silicon: Using exotic materials to help microchips keep improving

Researchers in MIT’s Microsystems Technology Laboratories, led by professor of electrical engineering Jesús del Alamo, have been investigating whether transistors made from more exotic materials can keep the processing power coming. At the International Electron Devices Meeting this week in Baltimore — the premier conference on microelectronics — they are presenting four separate papers that offer cause for hope.

Del Alamo's group works with compound semiconductors, so called because, unlike silicon, they're compounds of two or more elements. In particular, the group works with materials that combine elements from columns III and V of the periodic table and have names like gallium arsenide and indium gallium arsenide.

Electrons travel through these so-called III-V materials much more rapidly than they do through silicon, and III-V semiconductors have been used for years in high-speed electronics, such as the devices that process data in fiber-optic networks. But according to del Alamo, the transistors in III-V optical components are “larger by several orders of magnitude” than the transistors in computer chips. Whether they can maintain their performance advantages at dramatically smaller scales is the question that del Alamo’s group is tackling.

In a computer chip, transistors serve as on-off switches that help execute logic operations — comparing two values, for instance, or performing arithmetic functions. But transistors can also be used to amplify electrical signals — as they do in transistor radios. Last year, del Alamo’s group built a III-V transistor that set a world record for high-frequency operation, meaning that it was able to amplify higher-frequency signals than any previous transistor. While that gives some sense of the transistor’s capacities, two of the four papers being presented in Baltimore assess properties of the transistor that better predict its performance as a logic element.


To measure those properties, del Alamo says, the group built chips with multiple transistors that were identical except for the length of their most critical element, called the gate. If a transistor is a switch, the gate is what throws it. When the gate is electrically charged, it exerts an electrostatic force on a semiconductor layer beneath it; that force is what determines whether the semiconductor can conduct electricity or not.

By comparing the performance of transistors with different gate lengths at different frequencies, del Alamo’s group was able to extract precise measurements of both the velocity of the electrons passing through the transistor and the electrostatic force that the gate exerted on the semiconductor layer.

Electron velocity, del Alamo says, is the “key velocity that is going to set the performance of a future logic switch based on these kinds of materials, and we have obtained velocities that are easily two and a half times higher than the best silicon transistors made today.” While the electrostatic force exerted by the gate was lower than the researchers had hoped, measuring it so precisely allowed del Alamo’s group to develop better physical models of III-V transistors’ behavior. On the basis of those models, del Alamo says, he believes that the gate’s performance is a “manageable problem.”
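A rough textbook relation (not the group's exact extraction procedure) shows how the two measured quantities combine to determine switching performance. For a velocity-limited transistor the drive current is approximately

\[ I_D \approx W \, C_g \, (V_{GS} - V_T) \, v, \]

where \(W\) is the gate width, \(C_g\) the gate capacitance per unit area (the electrostatics term), and \(v\) the carrier velocity. Comparing otherwise identical devices that differ only in gate length lets the two factors be separated, for example through the transconductance \(g_m = \partial I_D / \partial V_{GS} \approx W C_g v\).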

“Before the industry is willing to switch from one technology to the next technology, people in the industry would want to make sure that they were making the right bet, the right investment,” says Robert Chau, a senior fellow at chip giant Intel who has worked closely with del Alamo’s group. “So obviously, people will need to thoroughly understand the physics behind this proposed new technology. I think that’s what Professor del Alamo’s group has been doing — getting to the science of the device operation.”

The third paper is in a similar vein, a collaboration with researchers at Purdue University who have developed simulators to model the performance of III-V transistors that are even smaller than the MIT prototypes. Del Alamo’s group used its precise measurements of the prototypes’ performance to help the Purdue team calibrate its simulator. The fourth paper, however, addresses a different topic: it proposes a new design for III-V transistors that, del Alamo says, will work better at smaller scales, because it permits a thinner layer of material to separate the gate and the semiconductor material beneath it.

For all their speed, III-V semiconductors have a disadvantage as a next-generation chip material: they’re rarer, and therefore more expensive, than silicon. But while the MIT prototypes were built entirely with III-V materials, del Alamo envisions “a silicon-like technology where just under the gate … you take silicon out and stick in indium gallium arsenide. It’s a minute amount of material that is required in every transistor to make this happen.” Indeed, “III-V is not really in competition with silicon,” Chau agrees. “It still will be a silicon transistor. It’s just that you’re using this non-silicon element to make it even better.”



Source:www.physorg.com/news179518970.html