Oct. 16, 2012 /PRNewswire/ -- Xilinx, Inc. (NASDAQ: XLNX) today set the stage for an acceleration of the automotive industry's development and deployment of a new generation of automotive driver assistance systems (ADAS). At the
Society of Automotive Engineers (SAE) Convergence 2012 (booth #909) event, Xilinx unveiled its automotive ARM®-processor based
Zynq™-7000 All Programmable system-on-a-chip (SoC) platform. By using programmable system integration to lower bill-of-materials (BOM) costs, the platform can reduce both the cost and the time-to-market of driver assistance solutions while meeting the sophisticated technical requirements of systems that demand driver-assurance-critical image-to-vision and in-vehicle networking capabilities.
"The ADAS space is evolving rapidly, and Xilinx's automotive-grade Zynq-7000 All Programmable SoC is the game changer the industry needs to accelerate the pace of ADAS technology deployment," said
DiFiore, director of Xilinx's automotive segment. "The Zynq-7000 family allows ADAS developers to implement a familiar software-based system, but with closely coupled, fully customized hardware accelerators that deliver a level of raw image processing performance and low power consumption that is simply not achievable with traditional multi-chip approaches."
DiFiore added that ADAS suppliers are already using Zynq-7000 devices rather than 'pre-canned' ASSPs to combine off-the-shelf IP from the Xilinx ecosystem along with their own proprietary IP and algorithms to differentiate themselves in the market, "all without the huge costs and time-to-market penalties that make ASIC development impractical – it's a win/win in this hotly contested market."
Automakers are bundling the current generation of ADAS applications – which includes blind spot detection, lane departure warning, automatic parking assistance, collision avoidance, pedestrian detection and driver drowsiness detection – as they seek to provide drivers with multiple safety features at lower cost. Common to both current and future ADAS applications is the use of a variety of cameras and ultrasonic sensors in combination with specialized, real-time processing systems – a prime example of the image-to-vision capabilities that Xilinx is focusing on across all its markets. Currently, these systems use multiple chips for the required processing, which keeps BOM costs high and limits the flexibility to scale between vehicle platforms.