Bitvis is an Intel DSN partner. The Intel FPGA Design Solutions Network (DSN) is an ecosystem of experienced, independent companies worldwide that provide customers valuable products and services complementing Intel® FPGA, SoC and Enpirion semiconductor devices. As an Intel DSN partner, we can offer a broad range of products and services, including boards, IP, engineering services, development tools and training, to help customers accelerate product development and reduce time-to-market.
The Xilinx Alliance Program is a worldwide ecosystem of qualified companies collaborating with Xilinx to further the development of All Programmable technologies. Leveraging open platforms and standards, Xilinx has built this ecosystem to meet customer needs and is committed to its long-term success. Comprising IP providers, EDA vendors, embedded software providers, system integrators, and hardware suppliers, Alliance members help accelerate design productivity while minimizing risk.
Microsemi has been developing space solutions for almost six decades and has played an important role in a wide variety of space programs globally. The company has a proven track record for innovation, quality and reliability and continues to build on that legacy with an impressive portfolio of industry-leading new product and technology introductions. Microsemi's high-reliability products and solutions have been used in applications that require high levels of radiation hardness for trips to the Moon, Mars and beyond. With one of the industry's most comprehensive portfolios of space products, Microsemi provides radiation-hardened and radiation-tolerant solutions. Bitvis has established extensive knowledge of the RTAX family and has completed several space implementations related to this technology.
Mentor is leading the way in the EDA industry with solutions that address our customers' needs for integrated systems design, hardware/software co-design, NT-based solutions and multi-platform environments. The key to success is the strong partner solutions that are available through the OpenDoor Program™. Coupling the Mentor offerings with the OpenDoor partner integration solutions shortens time-to-profit on new products in highly competitive markets.
Design architecture is probably the single most important factor for development efficiency and quality. Unfortunately, most FPGA modules are poorly structured. There is of course always some kind of structure, but often nowhere near a good and efficient one. Architecture is key at all levels, from the top level deep down into the module microarchitecture. A good design structure yields a good overview and understanding, easier extensibility and maintainability, and far better modification and reuse capability. Important side effects are lower power consumption, better frequency performance and a smaller FPGA/ASIC footprint.
For some applications or functionality, various types of high-level design can be extremely efficient and allow a much faster exploration of the solution space, both with respect to algorithms and logic implementation. Typical high-level design entries are C, OpenCL, MATLAB, filter generators, etc., from which you can automatically generate RTL code or netlists. Sometimes this could also be a manual translation to RTL, depending on the source and the complexity. Normally, verification of the functionality can be handled using the same tool or language as for design. If so, the verification of the RTL or netlist is normally a very simple testbench just to verify that the generation/translation/synthesis was successful. Platform tools that allow fast connection of IP to generate, for instance, a microcontroller system are often also considered high-level design. This can be very fast and efficient, but it would probably be more correct to call it an IP connection tool.
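The "verify that the generation was successful" idea can be sketched conceptually in Python (illustrative only, not tied to any specific HLS tool; the moving-average filter and all names are hypothetical): a high-level golden model is compared sample by sample against the output captured from the generated RTL simulation.

```python
def golden_moving_average(samples, taps=4):
    """High-level reference model, e.g. what you would prototype in C/MATLAB."""
    out = []
    for i in range(len(samples)):
        window = samples[max(0, i - taps + 1): i + 1]
        out.append(sum(window) // len(window))  # integer average over the window
    return out

def check_generated_rtl(stimulus, rtl_output):
    """Minimal 'generation was successful' check: compare against the model."""
    expected = golden_moving_average(stimulus)
    mismatches = [(i, e, a)
                  for i, (e, a) in enumerate(zip(expected, rtl_output))
                  if e != a]
    return len(mismatches) == 0, mismatches

stimulus = [0, 4, 8, 12, 16, 20]
# In a real flow rtl_output would be captured from the RTL simulation;
# here we feed the model's own output back in to show the passing case.
ok, diffs = check_generated_rtl(stimulus, golden_moving_average(stimulus))
```

The point is that the testbench itself stays trivial; all functional intent lives in the high-level model.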
An FPGA can solve lots of design challenges and allow you to build a better, cheaper, smaller and safer product. There are however numerous challenges and pitfalls when designing a complex FPGA. Even though an FPGA is designed using a software-like approach, it is very important to be aware that the result is actual hardware. Hence a detailed understanding of digital design is required for most applications. The major challenges are design architecture, parallelism and cycle relations, timing, clock domain crossing, resets, verification approach, testbench architecture, HW understanding and performance, and sometimes also low power, design for simulation, testability, etc. We often see FPGA designers stumble here, but it is all a matter of good design, good verification, experience and know-how. This is why we have a dedicated 2-day course on FPGA design and a 3-day course on verification.
Exactly the same challenges exist for verification architecture as for design architecture. A good testbench structure is the key to readability, overview, maintainability, extensibility, simplicity and reuse, all of which are critical for efficient and good verification. A self-checking testbench is mandatory whenever possible, and for that you need a good testbench infrastructure like the UVVM Utility Library, which is open source and used worldwide. Logging, alert handling, checkers, expectors, BFMs, etc. are obvious functionality inside any testbench. For larger or more complex design modules or FPGAs, perhaps with multiple simultaneously operating interfaces, a proper test harness architecture is required. Our open-source UVVM VVC Framework is the only standardised testbench and verification component system in the VHDL world, and it reduces development time significantly while of course also improving quality. Other important elements of structured verification are transaction-based interface handling (BFMs and VVCs), directed testing, constrained random, code coverage, functional coverage, corner case coverage, error injection, scoreboarding, etc.
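As a language-agnostic illustration of the scoreboarding concept mentioned above (UVVM provides this natively in VHDL; the Python below is only a conceptual sketch, and the transaction values are hypothetical): the stimulus side queues expected transactions, and the monitor side checks actual DUT output against them in order.

```python
from collections import deque

class Scoreboard:
    """Minimal scoreboard sketch: expected transactions are queued by the
    stimulus side and checked in order against actual DUT output."""
    def __init__(self):
        self.expected = deque()
        self.errors = 0
        self.checked = 0

    def add_expected(self, transaction):
        self.expected.append(transaction)

    def check_actual(self, transaction):
        self.checked += 1
        if not self.expected:
            self.errors += 1          # unexpected transaction from the DUT
        elif self.expected.popleft() != transaction:
            self.errors += 1          # mismatch against the expected value

    def report(self):
        # A self-checking testbench concludes with an explicit pass/fail:
        # no mismatches and no expected transactions left unchecked.
        return self.errors == 0 and not self.expected

sb = Scoreboard()
for tx in (0x10, 0x20, 0x30):
    sb.add_expected(tx)               # queued when stimulus is applied
for tx in (0x10, 0x20, 0x30):        # pretend these came from the DUT monitor
    sb.check_actual(tx)
```

The same split between stimulus-side expectation and monitor-side checking is what makes a testbench self-checking rather than waveform-inspected.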
For any efficient FPGA development it is important to start all critical phases early. This means that design and verification should be an iterative process, where verification is always handled concurrently with the design. This is of course not limited to design and verification, but also applies to documentation, synthesis and place & route.
Designers tend to fall into the same pitfalls time after time, but this could really be avoided if the various design and verification challenges were handled in a structured manner. It is again only a question of structure, architecture, methodology, knowledge and experience. Our courses help designers become more aware of the typical pitfalls and best-practice design techniques.
Some applications have extreme performance requirements. Others should use close to no power at all. For these applications it is very important to first evaluate system aspects and architecture. When it comes to the FPGA design itself, there are also lots of elements to consider. The most obvious here is again the architecture. Then there are of course lots of other issues to consider, like the actual coding, resource sharing, pipelining and clock domains. Tuning the constraints, synthesis and place & route is the least important factor here, but it is still definitely worth doing properly.
The FPGA is an important part of many electronic/embedded systems, but it is important to understand the whole picture in order to make a good product. Splitting the system functionality between a board-level microcontroller, DSP, FPGA logic, FPGA uCtrl, on- and off-chip software, on- and off-chip peripherals, etc. could be critical to achieving your goals, whether that is product cost, development cost, schedule, low power, performance, MTBF or size. Sometimes a large FPGA containing almost the complete system is the best solution; sometimes splitting the functionality between an FPGA and an external CPU is better; and sometimes not using an FPGA at all may be your best choice. There are lots of issues to consider here.
A feasibility study could be carried out at very different levels. It could even aim to find the best system architecture, with or without an FPGA. Normally it should involve evaluating different system solutions, FPGA technologies, FPGA architectures, IP and tools, and of course how to achieve various requirements like high performance, low power, technology, device size, complex challenges, etc.
Sometimes it is important to find out which technology and which device are required to satisfy the specification requirements, and sometimes which will result in the lowest product or development cost. For some products the cost difference between two neighbouring device sizes could be critical, in which case only the combination of feasibility study, specification adaptation, design architecture and device selection will give a good result.
The design architecture will affect your final product the most with respect to almost all parameters, but the actual HDL coding and your constraints for synthesis and place & route are also important. The way you write your code is often critical to getting the right result. In fact, for many projects you should understand exactly what the synthesis tool will generate for a given piece of HDL code; otherwise you might get ugly surprises late in your project, for instance if you cannot reach the required performance or strange, unexpected errors pop up now and then.
It is normally a very good idea to have a sparring partner to discuss everything from specification features, via design architecture, to advanced solutions for complex challenges. The overhead of such a sparring partner could be anything from 5% to 15%, but this is easily recovered by reaching a better solution or avoiding a time-consuming or error-prone pitfall. Many companies have reviews that are close to useless. It is important that the reviewers are allowed sufficient time to do a proper job; otherwise they will just scratch the surface. The level of ambition could of course vary a lot from one application to another, so this should be agreed upon up front. For FPGAs we typically differentiate between a functional review and a structural review, again typically depending on the application and the required quality level.
Almost all FPGA testbenches should be 100% self-checking. When this is the case, making test suites for regression is rather simple. Concurrent design and verification is important to allow a fast and smooth development, where the complete functionality can be verified immediately after making a design change – just by running a predefined set of tests.
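The "predefined set of tests" idea can be sketched in Python (illustrative only; a real FPGA regression would invoke an HDL simulator per test, and the test names and checks below are hypothetical). Because each test is self-checking, the runner only has to collect pass/fail results.

```python
def run_regression(tests):
    """Run every predefined, self-checking test; collect pass/fail per test."""
    results = {name: test() for name, test in tests.items()}
    failed = [name for name, passed in results.items() if not passed]
    return len(failed) == 0, failed

def test_counter_wraps():
    # Hypothetical self-checking test: a tiny model of a mod-4 counter.
    count = 0
    for _ in range(5):
        count = (count + 1) % 4
    return count == 1                 # the test decides pass/fail itself

def test_parity():
    # Hypothetical self-checking test: parity bit of a fixed word.
    bits = [1, 0, 1, 1]
    return sum(bits) % 2 == 1

tests = {"counter_wraps": test_counter_wraps, "parity": test_parity}
all_passed, failed = run_regression(tests)   # run the whole suite in one go
```

After any design change, rerunning the full dictionary of tests gives an immediate pass/fail verdict for the complete functionality.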
Quite a few FPGA designs are made by designers who are not familiar enough with FPGA design: typically SW, DSP or HW designers, or engineers doing only occasional FPGA design, who may thus not fully understand all the aspects of an advanced technology. This could result in strange, "unexplainable" bugs, error-prone products, schedule delays, etc. For these designers it could be extremely useful to have someone to discuss architectures, solutions, code, synthesis, etc. with, and maybe also to attend our design, verification or methodology courses.
Some products are more critical than others. It is important to know how to approach both a very simple, non-critical application and a mission-critical or safety-critical application. The most important factors for critical applications are no doubt design architecture, good coding, pitfall prevention and verification. To handle this properly it is important to understand all of the above aspects.