Students and faculty in the Department of Electrical, Computer, Software, and Systems Engineering are some of the more prolific researchers in the Embry-Riddle family. The department's research expenditures are nearly one-half those of the entire College of Engineering, with support from federal agencies including NSF, FAA, and NOAA as well as industry partners. The department is heavily involved in projects managed by ERAU's NEAR Lab and by the COE's Eagle Flight Research Center.

Strategic department research directions include three areas critical to the future of aerospace:

  • Detect and avoid technologies for unmanned aircraft systems;
  • Assured systems for aerospace, including cybersecurity and development assurance;
  • Modeling and simulation for aviation and aerospace.

Detect and avoid technologies enable unmanned aircraft systems to "see and be seen" by other aircraft and by air traffic controllers on the ground. A particular challenge is detecting and avoiding non-cooperative aircraft, that is, aircraft not equipped to announce their position either automatically or in response to interrogations from the ground.
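
At its core, detect and avoid rests on predicting whether two trajectories will come too close. The sketch below is a minimal, hypothetical illustration of that calculation: the closest point of approach between an own-ship and an intruder under a constant-velocity, flat-earth assumption. The units, thresholds, and function names are illustrative choices, not the department's actual algorithms.

```python
"""Minimal detect-and-avoid sketch: closest point of approach (CPA).

All names, units, and thresholds are illustrative assumptions. Positions
are metres in a local flat-earth frame, velocities are metres per second,
and both aircraft are assumed to fly straight at constant speed.
"""
import math
from dataclasses import dataclass


@dataclass
class Track:
    x: float   # east position (m)
    y: float   # north position (m)
    vx: float  # east velocity (m/s)
    vy: float  # north velocity (m/s)


def closest_point_of_approach(own: Track, intruder: Track):
    """Return (time_to_cpa_s, miss_distance_m) under straight-line flight."""
    # Intruder position and velocity relative to own-ship.
    rx, ry = intruder.x - own.x, intruder.y - own.y
    vx, vy = intruder.vx - own.vx, intruder.vy - own.vy
    v2 = vx * vx + vy * vy
    if v2 < 1e-9:                          # essentially the same velocity:
        return 0.0, math.hypot(rx, ry)     # separation never changes
    t_cpa = max(0.0, -(rx * vx + ry * vy) / v2)   # only look forward in time
    dx, dy = rx + vx * t_cpa, ry + vy * t_cpa
    return t_cpa, math.hypot(dx, dy)


def is_conflict(own: Track, intruder: Track,
                horizon_s: float = 60.0, min_sep_m: float = 500.0) -> bool:
    """Flag a conflict if the predicted miss distance violates the
    (illustrative) separation threshold within the look-ahead horizon."""
    t_cpa, miss = closest_point_of_approach(own, intruder)
    return t_cpa <= horizon_s and miss < min_sep_m


if __name__ == "__main__":
    own = Track(0.0, 0.0, 60.0, 0.0)             # flying east at 60 m/s
    intruder = Track(3000.0, 300.0, -55.0, 0.0)  # nearly head-on, offset 300 m
    print(closest_point_of_approach(own, intruder))  # ~26 s, ~300 m
    print(is_conflict(own, intruder))                # True under these thresholds
```

For cooperative traffic the intruder's state comes from its own broadcasts; for non-cooperative traffic it must be estimated from onboard sensors, which is what makes that case hard.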

Assured systems are systems that remain robust in the face of cybersecurity challenges; assured development refers to system design approaches that yield assured systems without high overhead.
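
One widely cited design pattern for bounding the risk of a complex or untrusted component, offered here only as a generic illustration of the behavioral side of assurance rather than as the department's approach, is a runtime-assurance ("Simplex"-style) wrapper: a simple, easily verified monitor checks each command against a safe envelope and falls back to a conservative backup when the check fails. The sketch below is a toy Python version; the controllers, the envelope, and all names are assumptions.

```python
"""Minimal runtime-assurance (Simplex-style) sketch.

A hypothetical illustration, not the department's method: a simple,
easily verified monitor bounds the behavior of a complex controller by
falling back to a conservative backup when the command leaves a safe
envelope. Controllers, limits, and names are all assumptions.
"""
from typing import Callable


def make_assured_controller(
    advanced: Callable[[float], float],  # complex, high-performance controller
    backup: Callable[[float], float],    # simple controller, cheap to assure
    command_limit: float,                # illustrative safe-envelope bound
) -> Callable[[float], float]:
    """Return a controller that never emits a command outside the envelope."""

    def assured(state: float) -> float:
        cmd = advanced(state)
        # Monitor: accept the advanced command only while it stays in the envelope.
        if abs(cmd) <= command_limit:
            return cmd
        # Otherwise revert to the backup, clamped so the guarantee always holds.
        fallback = backup(state)
        return max(-command_limit, min(command_limit, fallback))

    return assured


if __name__ == "__main__":
    # Toy 1-D example: drive the state toward zero.
    def aggressive(x: float) -> float:   # untrusted, may command too hard
        return -8.0 * x

    def gentle(x: float) -> float:       # simple enough to verify exhaustively
        return -0.5 * x

    controller = make_assured_controller(aggressive, gentle, command_limit=2.0)
    for state in (0.1, 1.0, 5.0):
        print(state, controller(state))  # -0.8, then fallbacks -0.5 and -2.0
```

The usual appeal of the pattern is that only the monitor and the backup need heavyweight assurance; the complex component can change without repeating that effort.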

Modeling and simulation for aviation involves everything from the logistics of getting passengers onto aircraft to planning how to get all air traffic around predicted bad weather without upsetting arrival times and locations.
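
As a flavor of what such models can look like, the sketch below is a toy Monte Carlo treatment of the passenger-boarding example: passengers file down a single aisle, pause to stow a bag at their row, and block everyone behind them while they do, and two boarding orders are compared by average time to seat everyone. The model and every parameter in it are illustrative assumptions, not a validated airline model.

```python
"""Toy boarding-strategy study: Monte Carlo over a one-dimensional aisle model.

A hypothetical illustration of the passenger-logistics example. Every
parameter below is an assumption chosen only to make the sketch run.
"""
import random
from statistics import mean


def simulate_boarding(order, stow_min=2, stow_max=6):
    """Return the number of time steps needed to seat every passenger.

    `order` is the sequence of target row indices in boarding order,
    with one passenger per row and row 0 nearest the door.
    """
    n_rows = max(order) + 1
    aisle = [None] * n_rows     # aisle[i] = (target_row, stow_steps_left) or None
    queue = list(order)
    seated, t = 0, 0
    while seated < len(order):
        # Sweep from the far end of the aisle toward the door so that a
        # chain of passengers can each advance one row in the same tick.
        for i in range(n_rows - 1, -1, -1):
            if aisle[i] is None:
                continue
            row, stow = aisle[i]
            if i == row:
                if stow > 1:
                    aisle[i] = (row, stow - 1)     # still stowing, blocks aisle
                else:
                    aisle[i] = None                # takes the seat
                    seated += 1
            elif i + 1 < n_rows and aisle[i + 1] is None:
                aisle[i + 1] = (row, stow)         # step one row down the aisle
                aisle[i] = None
        # Next passenger enters at the door if the first aisle cell is clear.
        if queue and aisle[0] is None:
            aisle[0] = (queue.pop(0), random.randint(stow_min, stow_max))
        t += 1
    return t


def average_boarding_time(strategy, n_rows=30, reps=200):
    """Monte Carlo estimate of mean boarding time for a boarding order."""
    times = []
    for _ in range(reps):
        rows = list(range(n_rows))
        if strategy == "back_to_front":
            order = sorted(rows, reverse=True)
        else:
            order = random.sample(rows, len(rows))
        times.append(simulate_boarding(order))
    return mean(times)


if __name__ == "__main__":
    random.seed(1)
    for strategy in ("back_to_front", "random"):
        print(strategy, round(average_boarding_time(strategy), 1))
```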

Design Verification of Airborne AI/ML Systems

The verification process for safety-critical systems must ensure that the system design performs all intended functionality within the required output ranges and safety limits. It must also ensure that no unintended functionality is present whose risk exceeds what the stated development assurance level allows. The objective of the AI/ML-based system is to assist with detecting unintended behavior during operations, enabling enhanced online hazard analysis and risk mitigation. Validation and verification techniques must be developed for these systems, with the eventual goal of adopting them in airborne operations.
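
One concrete form such online detection of unintended behavior can take, sketched below purely as a hypothetical illustration, is a runtime monitor wrapped around the ML component: each inference is checked against the required output range and against the input envelope the model was trained and validated on, and violations are recorded for hazard analysis. The model, ranges, and thresholds in the sketch are all assumptions.

```python
"""Minimal online-monitoring sketch for an ML component's outputs.

A hypothetical illustration of runtime detection of unintended behavior:
every inference is checked against the required output range and a crude
input-plausibility check; violations are logged for hazard analysis.
Model, ranges, and thresholds are all illustrative assumptions.
"""
from dataclasses import dataclass, field
from typing import Callable, List, Sequence, Tuple


@dataclass
class MonitoredModel:
    model: Callable[[Sequence[float]], float]   # the ML component under monitoring
    output_range: Tuple[float, float]           # required output limits
    input_bounds: List[Tuple[float, float]]     # per-feature bounds seen in training
    events: List[str] = field(default_factory=list)

    def predict(self, x: Sequence[float]) -> Tuple[float, bool]:
        """Return (output, ok). ok is False when unintended behavior is suspected."""
        ok = True
        # Input-plausibility check: features outside the training envelope
        # mean the model is being used outside its validated domain.
        for i, (lo, hi) in enumerate(self.input_bounds):
            if not (lo <= x[i] <= hi):
                self.events.append(f"feature {i}={x[i]} outside [{lo}, {hi}]")
                ok = False
        y = self.model(x)
        # Output-range check against the required limits.
        lo, hi = self.output_range
        if not (lo <= y <= hi):
            self.events.append(f"output {y} outside required range [{lo}, {hi}]")
            ok = False
        return y, ok


if __name__ == "__main__":
    # Toy stand-in for a learned component: a predicted bank-angle command (deg).
    def toy_model(x: Sequence[float]) -> float:
        return 10.0 * x[0] + 2.0 * x[1]

    monitor = MonitoredModel(
        model=toy_model,
        output_range=(-30.0, 30.0),                 # illustrative safety limit
        input_bounds=[(-1.0, 1.0), (-5.0, 5.0)],    # illustrative training envelope
    )
    print(monitor.predict([0.2, 1.0]))   # in range      -> (4.0, True)
    print(monitor.predict([2.0, 9.0]))   # off-nominal   -> flagged
    print(monitor.events)
```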