Networking Working Group J. Martocci, Ed. Internet-Draft Johnson Controls Inc. Intended status: Informational Pieter De Mil Expires: August 3, 2009 Ghent University IBCN W. Vermeylen Arts Centre Vooruit Nicolas Riou Schneider Electric February 3, 2009 Building Automation Routing Requirements in Low Power and Lossy Networks draft-ietf-roll-building-routing-reqs-05 Status of this Memo This Internet-Draft is submitted to IETF in full conformance with the provisions of BCP 78 and BCP 79. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet- Drafts. Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress." The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt. The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html. This Internet-Draft will expire on August 3, 2009. Copyright Notice Copyright (c) 2009 IETF Trust and the persons identified as the document authors. All rights reserved. This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Martocci Expires August 3, 2009 [Page 1] Internet-Draft draft-ietf-roll-building-routing-reqs February 2009 Abstract The Routing Over Low power and Lossy network (ROLL) Working Group has been chartered to work on routing solutions for Low Power and Lossy networks (LLN) in various markets: Industrial, Commercial (Building), Home and Urban. Pursuant to this effort, this document defines the routing requirements for building automation. Requirements Language The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119]. Table of Contents 1. Terminology....................................................4 2. Introduction...................................................4 3. Facility Management System (FMS) Topology......................5 3.1. Introduction..............................................5 3.2. Sensors/Actuators.........................................7 3.3. Area Controllers..........................................7 3.4. Zone Controllers..........................................7 4. Installation Methods...........................................7 4.1. Wired Communication Media.................................7 4.2. Device Density............................................8 4.2.1. HVAC Device Density..................................8 4.2.2. Fire Device Density..................................9 4.2.3. Lighting Device Density..............................9 4.2.4. Physical Security Device Density.....................9 4.3. Installation Procedure....................................9 5. Building Automation Routing Requirements......................10 5.1. Installation.............................................10 5.1.1. Zero-Configuration Installation.....................11 5.1.2. Sleeping Devices....................................11 5.1.3. 
Local Testing.......................................11 5.1.4. Device Replacement..................................12 5.2. Scalability..............................................12 5.2.1. Network Domain......................................12 5.2.2. Peer-to-Peer Communication..........................12 5.3. Mobility.................................................13 5.3.1. Mobile Device Requirements..........................13 5.4. Resource Constrained Devices.............................14 5.4.1. Limited Processing Power for Non-routing Devices....14 5.4.2. Limited Processing Power for Routing Devices........14 5.5. Addressing...............................................14 5.5.1. Unicast/Multicast/Anycast...........................14 Martocci,(et al) Expires August 3, 2009 [Page 2] Internet-Draft draft-ietf-roll-building-routing-reqs February 2009 5.6. Manageability............................................14 5.6.1. Diagnostics.........................................15 5.6.2. Route Tracking......................................15 5.7. Route Selection..........................................15 5.7.1. Path Cost...........................................15 5.7.2. Path Adaptation.....................................15 5.7.3. Route Redundancy....................................16 5.7.4. Route Discovery Time................................16 5.7.5. Route Preference....................................16 6. Traffic Pattern...............................................16 7. Security Considerations.......................................17 7.1. Security Requirements....................................17 7.1.1. Authentication......................................17 7.1.2. Encryption..........................................18 7.1.3. Disparate Security Policies.........................18 7.1.4. Routing Security Policies To Sleeping Devices.......18 8. IANA Considerations...........................................19 9. Acknowledgments...............................................19 10. References...................................................19 10.1. Normative References....................................19 10.2. Informative References..................................19 11. Appendix A: Additional Building Requirements.................20 11.1. Additional Commercial Product Requirements..............20 11.1.1. Cost...............................................20 11.1.2. Wired and Wireless Implementations.................20 11.1.3. World-wide Applicability...........................20 11.1.4. Support of Application Layer Protocols.............20 11.1.5. Use of Constrained Devices.........................21 11.2. Additional Installation and Commissioning Requirements..21 11.2.1. Device Setup Time..................................21 11.2.2. Unavailability of an IP network....................21 11.3. Additional Network Requirements.........................21 11.3.1. TCP/UDP............................................21 11.3.2. Interference Mitigation............................21 11.3.3. Real-time Performance Measures.....................21 11.3.4. Packet Reliability.................................22 11.3.5. Merging Commissioned Islands.......................22 11.3.6. Adjustable System Table Sizes......................22 11.3.7. Communication Distance.............................22 11.3.8. Automatic Gain Control.............................22 11.3.9. IPv4 Compatibility.................................23 11.3.10. Proxying for Sleeping Devices.....................23 11.3.11. 
Device and Network Integrity......................23 11.4. Additional Performance Requirements.....................23 11.4.1. Data Rate Performance..............................23 11.4.2. Firmware Upgrades..................................23 11.4.3. Prioritized Routing................................23 Martocci,(et al) Expires August 3, 2009 [Page 3] Internet-Draft draft-ietf-roll-building-routing-reqs February 2009 11.4.4. Path Persistence...................................24 11.5. Additional Network Security Requirements................24 11.5.1. Encryption Levels..................................24 11.5.2. Security Policy Flexibility........................24 12. Appendix B: FMS Use-Cases....................................24 12.1. Locking and Unlocking the Building......................25 12.2. Building Energy Conservation............................25 12.3. Inventory and Remote Diagnosis of Safety Equipment......25 12.4. Life Cycle of Field Devices.............................26 12.5. Surveillance............................................26 12.6. Emergency...............................................26 12.7. Public Address..........................................27 1. Terminology For description of the terminology used in this specification, please see [I-D.ietf-roll-terminology]. 2. Introduction Commercial buildings have been fitted with pneumatic and subsequently electronic communication pathways connecting sensors to their controllers for over one hundred years. Recent economic and technical advances in wireless communication allow facilities to increasingly utilize a wireless solution in lieu of a wired solution; thereby reducing installation costs while maintaining highly reliant communication. The cost benefits and ease of installation of wireless sensors allow customers to further instrument their facilities with additional sensors; providing tighter control while yielding increased energy savings. Wireless solutions will be adapted from their existing wired counterparts in many of the building applications including, but not limited to Heating, Ventilation, and Air Conditioning (HVAC), Lighting, Physical Security, Fire, and Elevator systems. These devices will be developed to reduce installation costs; while increasing installation and retrofit flexibility, as well as increasing the sensing fidelity to improve efficiency and building service quality. Martocci,(et al) Expires August 3, 2009 [Page 4] Internet-Draft draft-ietf-roll-building-routing-reqs February 2009 Sensing devices may be battery-less; battery or mains powered. Actuators and area controllers will be mains powered. Due to building code and/or device density (e.g. equipment room), it is envisioned that a mix of wired and wireless sensors and actuators will be deployed within a building. Facility Management Systems (FMS) are deployed in a large set of vertical markets including universities; hospitals; government facilities; Kindergarten through High School (K-12); pharmaceutical manufacturing facilities; and single-tenant or multi-tenant office buildings. These buildings range in size from 100K sqft structures (5 story office buildings), to 1M sqft skyscrapers (100 story skyscrapers) to complex government facilities such as the Pentagon. The described topology is meant to be the model to be used in all these types of environments, but clearly must be tailored to the building class, building tenant and vertical market being served. 
The following sections describe the sensor, actuator, area controller and zone controller layers of the topology. (NOTE: The Building Controller and Enterprise layers of the FMS are excluded from this discussion since they typically deal in communication rates requiring LAN/WLAN communication technologies).

Section 3 describes FMS architectures commonly installed in commercial buildings. Section 4 describes installation methods deployed for new and remodeled construction. Appendix A documents important commercial building requirements that are out of scope for routing yet will be essential to the final acceptance of the protocols used within the building. Appendix B describes various FMS use-cases and the interaction with humans for energy conservation and life-safety applications. Sections 3, 4, Appendix A and Appendix B are mainly included for educational purposes. The aim of this document is to provide the set of IPv6 routing requirements for LLNs in buildings as described in Section 5.

3. Facility Management System (FMS) Topology

3.1. Introduction

To understand the network systems requirements of a facility management system in a commercial building, this document uses a framework to describe the basic functions and composition of the system. An FMS is a hierarchical system of sensors, actuators, controllers and user interface devices based on spatial extent. Additionally, an FMS may also be divided functionally across alike, but different building subsystems such as HVAC, Fire, Security, Lighting, Shutters and Elevator control systems as denoted in Figure 1.

Much of the makeup of an FMS is optional and installed at the behest of the customer. Sensors and actuators have no standalone functionality. All other devices support partial or complete standalone functionality. These devices can optionally be tethered to form a more cohesive system. The customer requirements dictate the level of integration within the facility. This architecture provides excellent fault tolerance since each node is designed to operate in an independent mode if the higher layers are unavailable.

                  +-----+  +-----+  +-----+  +-----+  +-----+  +-----+
  Bldg App'ns     |     |  |     |  |     |  |     |  |     |  |     |
                  |     |  |     |  |     |  |     |  |     |  |     |
  Building Cntl   |     |  |     |  |  S  |  |  L  |  |  S  |  |  E  |
                  |     |  |     |  |  E  |  |  I  |  |  H  |  |  L  |
  Area Control    |  H  |  |  F  |  |  C  |  |  G  |  |  U  |  |  E  |
                  |  V  |  |  I  |  |  U  |  |  H  |  |  T  |  |  V  |
  Zone Control    |  A  |  |  R  |  |  R  |  |  T  |  |  T  |  |  A  |
                  |  C  |  |  E  |  |  I  |  |  I  |  |  E  |  |  T  |
  Actuators       |     |  |     |  |  T  |  |  N  |  |  R  |  |  O  |
                  |     |  |     |  |  Y  |  |  G  |  |  S  |  |  R  |
  Sensors         |     |  |     |  |     |  |     |  |     |  |     |
                  +-----+  +-----+  +-----+  +-----+  +-----+  +-----+

                Figure 1: Building Systems and Devices
                (columns, read vertically: HVAC, Fire, Security,
                 Lighting, Shutters, Elevator)

3.2. Sensors/Actuators

As Figure 1 indicates, an FMS may be composed of many functional stacks or silos that are interoperably woven together via Building Applications. Each silo has an array of sensors that monitor the environment and actuators that effect the environment as determined by the upper layers of the FMS topology. The sensors typically are the fringe of the network structure providing environmental data into the system. The actuators are the sensor's counterparts modifying the characteristics of the system based on the input sensor data and the applications deployed.

3.3.
Area Controllers An area describes a small physical locale within a building, typically a room. HVAC (temperature and humidity) and Lighting (room lighting, shades, solar loads) vendors oft times deploy area controllers. Area controls are fed by sensor inputs that monitor the environmental conditions within the room. Common sensors found in many rooms that feed the area controllers include temperature, occupancy, lighting load, solar load and relative humidity. Sensors found in specialized rooms (such as chemistry labs) might include air flow, pressure, CO2 and CO particle sensors. Room actuation includes temperature setpoint, lights and blinds/curtains. 3.4. Zone Controllers Zone Control supports a similar set of characteristics as the Area Control albeit to an extended space. A zone is normally a logical grouping or functional division of a commercial building. A zone may also coincidentally map to a physical locale such as a floor. Zone Control may have direct sensor inputs (smoke detectors for fire), controller inputs (room controllers for air-handlers in HVAC) or both (door controllers and tamper sensors for security). Like area/room controllers, zone controllers are standalone devices that operate independently or may be attached to the larger network for more synergistic control. 4. Installation Methods 4.1. Wired Communication Media Commercial controllers are traditionally deployed in a facility using twisted pair serial media following the EIA-485 electrical standard Martocci,(et al) Expires August 3, 2009 [Page 7] Internet-Draft draft-ietf-roll-building-routing-reqs February 2009 operating nominally at 38400 to 76800 baud. This allows runs to 5000 ft without a repeater. With the maximum of three repeaters, a single communication trunk can serpentine 15000 ft. EIA-485 is a multi-drop media allowing upwards to 255 devices to be connected to a single trunk. Most sensors and virtually all actuators currently used in commercial buildings are "dumb", non-communicating hardwired devices. However, sensor buses are beginning to be deployed by vendors which are used for smart sensors and point multiplexing. The Fire industry deploys addressable fire devices, which usually use some form of proprietary communication wiring driven by fire codes. 4.2. Device Density Device density differs depending on the application and as dictated by the local building code requirements. The following sections detail typical installation densities for different applications. 4.2.1. HVAC Device Density HVAC room applications typically have sensors/actuators and controllers spaced about 50ft apart. In most cases there is a 3:1 ratio of sensors/actuators to controllers. That is, for each room there is an installed temperature sensor, flow sensor and damper actuator for the associated room controller. HVAC equipment room applications are quite different. An air handler system may have a single controller with upwards to 25 sensors and actuators within 50 ft of the air handler. A chiller or boiler is also controlled with a single equipment controller instrumented with 25 sensors and actuators. Each of these devices would be individually addressed since the devices are mandated or optional as defined by the specified HVAC application. Air handlers typically serve one or two floors of the building. Chillers and boilers may be installed per floor, but many times service a wing, building or the entire complex via a central plant. These numbers are typical. 
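As a rough, non-normative illustration of these densities, the short Python sketch below estimates the addressable HVAC device count on a single hypothetical floor. The rooms-per-floor figure is an assumption chosen purely for the example; only the one-controller-plus-three-points ratio per room and the roughly 25 points per air handler come from the text above.

# Back-of-envelope sketch using the densities described above.  The
# rooms-per-floor figure is an assumption for illustration only.

rooms_per_floor = 100            # assumed typical office floor
controllers_per_room = 1
points_per_room = 3              # temperature sensor, flow sensor, damper actuator
air_handlers_per_floor = 1       # an air handler typically serves one or two floors
points_per_air_handler = 25

room_devices = rooms_per_floor * (controllers_per_room + points_per_room)
plant_devices = air_handlers_per_floor * (1 + points_per_air_handler)

print(f"HVAC devices on one floor: about {room_devices + plant_devices}")
# -> about 426 addressable HVAC devices, before fire, lighting and security
#    devices are added; see also the 2000-node scalability target in 5.2.1.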
In special cases, such as clean rooms, operating rooms, pharmaceuticals and labs, the ratio of sensors to controllers can increase by a factor of three. Tenant installations such as malls would opt for packaged units where much of the sensing and actuation is integrated into the unit. Here a single device address would serve the entire unit. Martocci,(et al) Expires August 3, 2009 [Page 8] Internet-Draft draft-ietf-roll-building-routing-reqs February 2009 4.2.2. Fire Device Density Fire systems are much more uniformly installed with smoke detectors installed about every 50 feet. This is dictated by local building codes. Fire pull boxes are installed uniformly about every 150 feet. A fire controller will service a floor or wing. The fireman's fire panel will service the entire building and typically is installed in the atrium. 4.2.3. Lighting Device Density Lighting is also very uniformly installed with ballasts installed approximately every 10 feet. A lighting panel typically serves 48 to 64 zones. Wired systems tether many lights together into a single zone. Wireless systems configure each fixture independently to increase flexibility and reduce installation costs. 4.2.4. Physical Security Device Density Security systems are non-uniformly oriented with heavy density near doors and windows and lighter density in the building interior space. The recent influx of interior and perimeter camera systems is increasing the security footprint. These cameras are atypical endpoints requiring upwards to 1 megabit/second (Mbit/s) data rates per camera as contrasted by the few Kbits/s needed by most other FMS sensing equipment. Previously, camera systems had been deployed on proprietary wired high speed network. More recent implementations utilize wired or wireless IP cameras integrated to the enterprise LAN. 4.3. Installation Procedure Wired FMS installation is a multifaceted procedure depending on the extent of the system and the software interoperability requirement. However, at the sensor/actuator and controller level, the procedure is typically a two or three step process. Most FMS equipment will utilize 24 VAC power sources that can be installed by a low-voltage electrician. He/she arrives on-site during the construction of the building prior to drywall and ceiling installation. This allows him/her to allocate wall space, easily land the equipment and run the wired controller and sensor networks. The Building Controllers and Enterprise network are not normally installed until months later. The electrician completes his task by running a wire verification procedure that shows proper continuity between the devices and proper local operation of the devices. Martocci,(et al) Expires August 3, 2009 [Page 9] Internet-Draft draft-ietf-roll-building-routing-reqs February 2009 Later in the installation cycle, the higher order controllers are installed, programmed and commissioned together with the previously installed sensors, actuators and controllers. In most cases the IP network is still not operable. The Building Controllers are completely commissioned using a crossover cable or a temporary IP switch together with static IP addresses. Once the IP network is operational, the FMS may optionally be added to the enterprise network. The wireless installation process must follow the same work flow. The electrician installs the products as before and executes local functional tests between the wireless device to assure operation before leaving the job. 
The electrician does not carry a laptop, so the commissioning must be built into the device operation.

5. Building Automation Routing Requirements

Following are the building automation routing requirements for a network used to integrate building sensor, actuator and control products. These requirements have been limited to routing requirements only. These requirements are written without presuming any preordained network topology, physical media (wired) or radio technology (wireless).

See Appendix A for additional requirements that have been deemed outside the scope of this document yet will pertain to the successful deployment of building automation systems.

5.1. Installation

Building control systems typically are installed and tested by electricians having little computer knowledge and no network knowledge whatsoever. These systems are often installed during the building construction phase before the drywall and ceilings are in place. For new construction projects, the building enterprise IP network is not in place during installation of the building control system. For retrofit applications, the installer will still operate independently from the IP network so as not to affect network operations during the installation phase.

Local (ad hoc) testing of sensors and room controllers must be completed before the tradesperson can complete his/her work. This testing allows the tradesperson to verify correct operation of the client (e.g. light switch) and server (e.g. light ballast) before leaving the jobsite. In traditional wired systems, correct operation of a light switch/ballast pair was as simple as flipping on the light switch. In wireless applications, the tradesperson has to assure the same operation, yet also be sure that the light switch is associated with the proper ballast.

System-level commissioning will later be performed by a more computer-savvy person with access to a commissioning device (e.g. a laptop computer). The completely installed and commissioned enterprise IP network may or may not be in place at this time.

Following are the installation routing requirements.

5.1.1. Zero-Configuration Installation

It MUST be possible to fully commission network devices without requiring any additional commissioning device (e.g. laptop).

5.1.2. Sleeping Devices

Sensing devices will, in some cases, utilize battery power or energy harvesting techniques for power and will operate mostly in a sleep mode to maintain power consumption within a modest budget. The routing protocol MUST take into account device characteristics such as power budget. If such devices provide routing, rather than merely host connectivity, the energy costs associated with such routing need to fit within the power budget. If the mechanisms for duty cycling dictate very long response times or specific temporal scheduling, routing will need to take such constraints into account.

Typically, batteries need to be operational for at least 5 years when the sensing device is transmitting its data (e.g. 64 octets) once per minute. This requires that sleeping devices MUST have minimal link on time when they awake and transmit onto the network. Moreover, maintaining the ability to receive inbound data MUST be accomplished with minimal link on time.

Proxies with unconstrained power budgets are often used to cache the inbound data for a sleeping device until the device awakens. In such cases, the routing protocol MUST discover the capability of a node to act as a proxy during path calculation and then deliver the packet to the assigned proxy for later delivery to the sleeping device upon its next awakened cycle.
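As a non-normative illustration of the proxying behavior described in Section 5.1.2, the following Python sketch shows a mains-powered neighbor caching inbound packets for a sleeping node and flushing them when that node announces it is awake. The class name, queue bound and callback signature are assumptions made only for this sketch; they are not part of the requirements.

# Minimal sketch of the Section 5.1.2 proxying idea (illustrative only;
# the cache size, wake handling and names are assumptions).

from collections import defaultdict, deque

class SleepProxy:
    """Mains-powered node that caches packets for sleeping neighbors."""

    def __init__(self, max_cached=16):
        self.max_cached = max_cached
        self._cache = defaultdict(deque)   # sleeping-node id -> packets

    def enqueue(self, node_id, packet):
        """Called when the routing layer hands us a packet addressed to a
        node that is currently asleep."""
        queue = self._cache[node_id]
        if len(queue) >= self.max_cached:
            queue.popleft()                # drop oldest under memory pressure
        queue.append(packet)

    def on_wake(self, node_id, send):
        """Called when the sleeping node announces it is awake; flush the
        cached packets through the supplied send() callable."""
        queue = self._cache.pop(node_id, deque())
        for packet in queue:
            send(node_id, packet)
        return len(queue)

proxy = SleepProxy()
proxy.enqueue("sensor-17", b"setpoint update")
proxy.enqueue("sensor-17", b"key rollover")
delivered = proxy.on_wake("sensor-17", lambda nid, pkt: print(nid, pkt))
print("delivered", delivered, "packets")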
5.1.3. Local Testing

The local sensors and requisite actuators and controllers must be testable within the locale (e.g. room) to assure communication connectivity and local operation without requiring other systemic devices. Routing should allow for temporary ad hoc paths to be established that are updated as the network physically and functionally expands.

5.1.4. Device Replacement

Replacement devices need to be plug-and-play, with no additional setup compared to what is normally required for a new device. Devices referencing data in the replaced device MUST be able to reference data in its replacement without being reconfigured to refer to the new device. Thus, such a reference cannot be a hardware identifier, such as the MAC address, nor a hard-coded route. If such a reference is an IP address, the replacement device MUST be assigned the IP address previously bound to the replaced device. Alternatively, if the logical equivalent of a hostname is used for the reference, it must be resolved to the replacement device's IP address.
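The following non-normative sketch illustrates the kind of stable logical reference described in Section 5.1.4: referencing devices resolve a logical point name rather than a hard-coded address, so a replacement requires updating only one binding. The registry class, point names and addresses are illustrative assumptions, not part of this document.

# Illustrative sketch: one way a logical point name can stay stable across
# a device replacement.  All names and addresses here are hypothetical.

class PointRegistry:
    """Maps stable logical point names to the current device address."""

    def __init__(self):
        self._bindings = {}          # logical name -> IPv6 address (string)

    def bind(self, name, address):
        self._bindings[name] = address

    def resolve(self, name):
        # Referencing devices always resolve by logical name, never by a
        # hard-coded address or MAC, so a swap is invisible to them.
        return self._bindings[name]

registry = PointRegistry()
registry.bind("floor3.room312.temp", "2001:db8::a1")

# A controller references the sensor only through its logical name.
print(registry.resolve("floor3.room312.temp"))   # 2001:db8::a1

# The sensor fails and is replaced; only the binding is updated.
registry.bind("floor3.room312.temp", "2001:db8::b7")
print(registry.resolve("floor3.room312.temp"))   # 2001:db8::b7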
5.2. Scalability

Building control systems are designed for facilities from 50,000 sq. ft. to 1M+ sq. ft. The networks that support these systems must cost-effectively scale accordingly. In larger facilities, installation may occur simultaneously on various wings or floors, yet the end system must seamlessly merge. Following are the scalability requirements.

5.2.1. Network Domain

The routing protocol MUST be able to support networks with at least 2000 nodes, supporting at least 1000 routing devices and 1000 non-routing devices. Subnetworks (e.g. rooms, primary equipment) within the network must support up to 255 sensors and/or actuators.

5.2.2. Peer-to-Peer Communication

The data domain for commercial FMS systems may sprawl across a vast portion of the physical domain. For example, a chiller may reside in the facility's basement due to its size, yet the associated cooling towers will reside on the roof. The cold-water supply and return pipes serpentine through all the intervening floors. The feedback control loops for these systems require data from across the facility.

A network device MUST be able to communicate in a peer-to-peer manner with any other device on the network. Thus, the routing protocol MUST provide routes between arbitrary hosts within the appropriate administrative domain.

5.3. Mobility

Most devices are affixed to walls or installed on ceilings within buildings. Hence the mobility requirements for commercial buildings are few. However, in wireless environments, location tracking of occupants and assets is gaining favor. Asset tracking applications require monitoring movement with a granularity of a minute. This soft real-time performance requirement is reflected in the performance requirements below.

5.3.1. Mobile Device Requirements

To minimize network dynamics, mobile devices SHOULD NOT be allowed to act as forwarding devices (routers) for other devices in the LLN.

A mobile device that moves within an LLN SHOULD reestablish end-to-end communication to a fixed device also in the LLN within 2 seconds. The network convergence time should be less than 5 seconds once the mobile device stops moving.

A mobile device that moves outside of an LLN SHOULD reestablish end-to-end communication to a fixed device in the new LLN within 5 seconds. The network convergence time should be less than 5 seconds once the mobile device stops moving.

A mobile device that moves outside of one LLN into another LLN SHOULD reestablish end-to-end communication to a fixed device in the old LLN within 10 seconds. The network convergence time should be less than 10 seconds once the mobile device stops moving.

A mobile device that moves outside of one LLN into another LLN SHOULD reestablish end-to-end communication to another mobile device in the new LLN within 20 seconds. The network convergence time should be less than 30 seconds once the mobile devices stop moving.

A mobile device that moves outside of one LLN into another LLN SHOULD reestablish end-to-end communication to a mobile device in the old LLN within 30 seconds. The network convergence time should be less than 30 seconds once the mobile devices stop moving.

5.4. Resource Constrained Devices

The processing power and memory of sensing and actuator devices may be four orders of magnitude (i.e. 10,000x) smaller than those of more traditional client devices on an IP network. The routing mechanisms must therefore be tailored to fit these resource-constrained devices.

5.4.1. Limited Processing Power for Non-routing Devices

The routing functionality required of non-routing devices (e.g. sleeping sensors and actuators) SHOULD be implementable on 8-bit devices with no more than 128KB of memory.

5.4.2. Limited Processing Power for Routing Devices

The routing functionality required of routing devices (e.g. room controllers) SHOULD be implementable on 8-bit devices with no more than 256KB of flash memory.

5.5. Addressing

Facility Management Systems require different communication schemes to solicit or post network information. Broadcasts or anycasts need to be used to resolve unresolved references within a device when the device first joins the network. As with any network communication, broadcasting should be minimized. This is especially a problem for small embedded devices with limited network bandwidth. In many cases a global broadcast could be replaced with a multicast, since the application knows the application domain. Broadcasts and multicasts are typically used for network joins and application binding in embedded systems.

5.5.1. Unicast/Multicast/Anycast

Routing MUST support anycast, unicast, and multicast.

5.6. Manageability

In addition to the initial installation of the system (see Section 5.1), it is equally important for the ongoing maintenance of the system to be simple and inexpensive.

5.6.1. Diagnostics

To improve diagnostics, the network layer SHOULD be able to be placed into and out of 'verbose' mode. Verbose mode is a temporary debugging mode that provides additional communication information, including at least the total number of routed packets sent and received, the number of routing failures (no route available), neighbor table members, and routing table entries.

5.6.2. Route Tracking

Route diagnostics SHOULD be supported, providing information such as path quality, number of hops, and available alternate active paths with associated costs.

Path quality is the relative measure of 'goodness' of the selected source-to-destination path as compared to alternate paths. This composite value may be measured as a function of hop count, signal strength, available power, existing active paths or any other criteria deemed by ROLL as the path cost differentiator.
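The following non-normative Python sketch shows one possible composite path-quality metric of the kind described above. The particular weights, the 0..1 normalization, and the field names are illustrative assumptions; this document does not mandate any specific formula.

# Illustrative composite path-quality metric (lower cost is better).
# Weights and normalization are assumptions chosen for this sketch.

def link_cost(etx, rssi_margin_db, battery_fraction,
              w_etx=1.0, w_signal=0.5, w_energy=0.5):
    """Return a per-link cost.

    etx              -- expected transmission count (>= 1.0)
    rssi_margin_db   -- received signal margin above sensitivity, in dB
    battery_fraction -- remaining energy, 1.0 for mains-powered nodes
    """
    signal_penalty = max(0.0, 1.0 - rssi_margin_db / 20.0)   # weak link -> higher cost
    energy_penalty = 1.0 - battery_fraction                  # drained node -> higher cost
    return w_etx * etx + w_signal * signal_penalty + w_energy * energy_penalty

def path_cost(links):
    """A path's cost is the sum of its link costs (hop count is implicit,
    since every additional hop adds at least w_etx)."""
    return sum(link_cost(**link) for link in links)

primary = [dict(etx=1.1, rssi_margin_db=18, battery_fraction=1.0),
           dict(etx=1.3, rssi_margin_db=12, battery_fraction=0.8)]
backup  = [dict(etx=1.0, rssi_margin_db=25, battery_fraction=1.0),
           dict(etx=1.0, rssi_margin_db=22, battery_fraction=0.4),
           dict(etx=1.2, rssi_margin_db=9,  battery_fraction=1.0)]

print(round(path_cost(primary), 2), round(path_cost(backup), 2))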
5.7. Route Selection

Route selection determines the reliability and quality of the communication paths among the devices. Optimizing the routes over time resolves any anomalies that develop at system startup, when nodes are asynchronously adding themselves to the network. Path adaptation will reduce latency if the path costs consider hop count as a cost attribute.

5.7.1. Path Cost

The routing protocol MUST support a metric of route quality and optimize path selection according to such metrics within constraints established for links along the paths. These metrics SHOULD reflect attributes such as signal strength, available bandwidth, hop count, energy availability and communication error rates.

5.7.2. Path Adaptation

Communication paths MUST adapt over time toward optimality with respect to the chosen metric(s) (e.g. signal quality).

5.7.3. Route Redundancy

The routing layer SHOULD be configurable to allow secondary and tertiary paths to be established and used upon failure of the primary path.

5.7.4. Route Discovery Time

Mission-critical commercial applications (e.g. Fire, Security) require reliable communication and guaranteed end-to-end delivery of all messages in a timely fashion. Application layer time-outs must be selected judiciously to cover anomalous conditions such as lost packets and/or path discoveries, yet not be set so large that they over-damp the network response. If route discovery occurs during packet transmission time, it SHOULD NOT add more than 120ms of latency to the packet delivery time.

5.7.5. Route Preference

Route cost algorithms SHOULD allow the installer to optionally select 'preferred' paths based on the known spatial layout of the communicating devices.

6. Traffic Pattern

The independent nature of the automation systems within a building weighs heavily on the network traffic patterns. Much of the real-time sensor data stays within the local environment. Alarming and other event data will percolate to higher layers.

Systemic data may be either polled or event based. Polled data systems will generate a uniform packet load on the network. This architecture has proven not to be scalable. Most vendors have developed event-based systems, which pass data only on an event. These systems are highly scalable and generate little network traffic at quiescence. Unfortunately, such systems will generate a heavy load on startup, since all the initial data must migrate to the controller level. They will also generate a temporary but heavy load during firmware upgrades. This latter load can normally be mitigated by performing these downloads during off-peak hours.

Devices will need to reference peers occasionally for sensor data or to coordinate across systems. Normally, though, data will migrate from the sensor level upwards through the local, area and then supervisory levels. Bottlenecks will typically form at the funnel point from the area controllers to the supervisory controllers.
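The following non-normative sketch illustrates the event-based ("report by exception") reporting style described above, which keeps quiescent traffic low. The deadband, heartbeat interval and all names are assumptions made for illustration, not values taken from this document.

# Illustrative change-of-value reporter: send only on significant change,
# plus an occasional heartbeat.  Values and names are assumptions.

import time

class CovReporter:
    """Sends a sensor value only when it changes by more than a deadband,
    plus a periodic heartbeat so supervisors can detect dead nodes."""

    def __init__(self, send, deadband=0.5, heartbeat_s=900):
        self.send = send                  # callable(value) that transmits
        self.deadband = deadband
        self.heartbeat_s = heartbeat_s
        self._last_value = None
        self._last_sent = 0.0

    def sample(self, value, now=None):
        now = time.monotonic() if now is None else now
        changed = (self._last_value is None or
                   abs(value - self._last_value) > self.deadband)
        stale = (now - self._last_sent) >= self.heartbeat_s
        if changed or stale:
            self.send(value)
            self._last_value = value
            self._last_sent = now

sent = []
reporter = CovReporter(sent.append, deadband=0.5, heartbeat_s=900)
for t, reading in enumerate([21.0, 21.1, 21.2, 22.0, 22.1]):
    reporter.sample(reading, now=float(t))
print(sent)        # [21.0, 22.0] -- small quiescent changes are suppressed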
Initial system startup after a controlled outage or unexpected power failure puts tremendous stress on the network and on the routing algorithms. An FMS is composed of a myriad of control algorithms at the room, area, zone, and enterprise layers. When these control algorithms are at quiescence, the real-time data changes are small and the network will not saturate. However, upon any power loss, the control loops and real-time data quickly atrophy. A ten-minute outage may take many hours to regain control. Upon restart, all line-powered devices power on instantaneously. However, due to application startup and self-tests, these devices will attempt to join the network at random times. Empirical testing indicates that routing paths acquired during startup will tend to be very oblique, since the available neighbor lists are incomplete. This demands an adaptive routing protocol to allow for path optimization as the network stabilizes.

7. Security Considerations

Security policies, especially for wireless encryption and device authentication, need to be considered, particularly with regard to their impact on the processing capabilities of, and the additional latency incurred on, the sensors, actuators and controllers. FMS systems are typically highly configurable in the field, and hence the security policy is most often dictated by the type of building in which the FMS is installed. Single-tenant, owner-occupied office buildings installing lighting or HVAC control are candidates for implementing low or even no security on the LLN. Conversely, military or pharmaceutical facilities require strong security policies. As noted in the installation procedures above, security policies must be flexible enough to allow no security during the installation phase (prior to building occupancy), yet allow the security level to be easily raised network-wide during the commissioning phase of the system.

7.1. Security Requirements

7.1.1. Authentication

Authentication SHOULD be optional on the LLN. Authentication SHOULD be fully configurable on-site. Authentication policy and updates MUST be routable over-the-air. Authentication SHOULD occur upon joining or rejoining a network. However, once authenticated, devices SHOULD NOT need to reauthenticate with any other devices in the LLN. Packets may need authentication at the source and destination nodes; however, packets routed through intermediate hops should not need reauthentication at each hop.

7.1.2. Encryption

7.1.2.1. Encryption Types

Optional data encryption of packets MUST be supported by use of either a network-wide key and/or an application key. The network key would apply to all devices in the LLN. The application key would apply to a subset of devices on the LLN. The network key and application keys would be mutually exclusive. The routing protocol MUST allow a packet encrypted with an application key to be routed through forwarding devices without requiring each node in the path to have the application key.

7.1.2.2. Packet Encryption

The encryption policy MUST support encryption of the payload only or of the entire packet. Payload-only encryption would eliminate the decryption/re-encryption overhead at every hop, providing better real-time performance.
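The following non-normative Python sketch illustrates the payload-only, application-key model of Sections 7.1.2.1 and 7.1.2.2: the routing header stays in the clear so forwarders can route without the application key, and only the endpoints decrypt the payload. It uses the third-party 'cryptography' package (Fernet) purely as a stand-in cipher; a real LLN would use its own security suite, and all names here are illustrative.

# Illustrative only: payload-only encryption with an application key.
# Requires the third-party 'cryptography' package (pip install cryptography).

from cryptography.fernet import Fernet

app_key = Fernet.generate_key()        # shared only by the two endpoints
endpoint_cipher = Fernet(app_key)

def build_packet(src, dst, payload):
    # The routing header stays in the clear; only the payload is encrypted
    # with the application key.
    return {"src": src, "dst": dst, "payload": endpoint_cipher.encrypt(payload)}

def forward(packet, my_addr, next_hop):
    # A forwarding node routes purely on the cleartext header; it never
    # needs the application key and never touches the payload.
    assert packet["dst"] != my_addr
    return next_hop, packet

packet = build_packet("2001:db8::11", "2001:db8::42", b"occupancy=1")
_, packet = forward(packet, my_addr="2001:db8::20", next_hop="2001:db8::30")

# Only the destination, which holds the application key, can decrypt.
print(Fernet(app_key).decrypt(packet["payload"]))   # b'occupancy=1'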
7.1.3. Disparate Security Policies

Due to the limited resources of an LLN, the security policy defined within the LLN MUST be able to differ from that of the rest of the IP network within the facility, yet packets MUST still be able to be routed to or through the LLN from/to these networks.

7.1.4. Routing Security Policies To Sleeping Devices

The routing protocol MUST gracefully handle the routing of temporal security updates (e.g. dynamic keys) to sleeping devices during their 'awake' cycle, to assure that sleeping devices can readily and efficiently access the network.

8. IANA Considerations

This document includes no request to IANA.

9. Acknowledgments

In addition to the authors, J. P. Vasseur, David Culler, Ted Humpal and Zach Shelby are gratefully acknowledged for their contributions to this document.

This document was prepared using 2-Word-v2.0.template.dot.

10. References

10.1. Normative References

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

10.2. Informative References

[I-D.ietf-roll-terminology] Vasseur, J., "Terminology in Low power And Lossy Networks", draft-ietf-roll-terminology-00 (work in progress), October 2008.

11. Appendix A: Additional Building Requirements

Appendix A contains additional building requirements that were deemed out of scope for ROLL, yet provide ancillary substance for the reader.

11.1. Additional Commercial Product Requirements

11.1.1. Cost

The total installed infrastructure cost, including but not limited to the media, the required infrastructure devices (amortized across the number of devices), and the labor to install and commission the network, must not exceed $1.00/foot for wired implementations. Wireless implementations (total installed cost) must cost no more than 80% of wired implementations.

11.1.2. Wired and Wireless Implementations

Vendors will likely not develop a separate product line for both wired and wireless networks. Hence, the solutions set forth must support both wired and wireless implementations.

11.1.3. World-wide Applicability

Wireless devices must be supportable in the 2.4 GHz ISM band. Wireless devices should be supportable in the 900 MHz and 868 MHz ISM bands as well.

11.1.4. Support of Application Layer Protocols

11.1.4.1. BACnet Building Protocol

BACnet is a world-wide, ISO-standardized application layer protocol that runs over IP. Devices implementing the ROLL routing protocol should support the BACnet protocol.

11.1.5. Use of Constrained Devices

The network may be composed of a heterogeneous mix of mains-powered, battery-powered and energy-harvesting devices. The routing protocol must support these constrained devices.

11.1.5.1. Energy Harvested Sensors

Devices utilizing available ambient energy (e.g. solar, air flow, temperature differential) for sensing and communicating should be supported by the solution set.

11.2. Additional Installation and Commissioning Requirements

11.2.1. Device Setup Time

Device and network setup by the installer must take no longer than 20 seconds per device installed.

11.2.2. Unavailability of an IP network

It must be possible for an application engineer to commission products prior to the installation of the IP network infrastructure (e.g. switches, routers, DHCP, DNS).
11.3. Additional Network Requirements

11.3.1. TCP/UDP

Connection-based and connectionless services must be supported.

11.3.2. Interference Mitigation

The network must automatically detect interference and seamlessly migrate the network hosts to a better channel to improve communication. Channel changes and the nodes' response to the channel change must occur within 60 seconds.

11.3.3. Real-time Performance Measures

A node transmitting a 'request with expected reply' to another node must send the message to the destination and receive the response in not more than 120 msec. This response time should be achievable with 5 or fewer hops in each direction. This requirement assumes network quiescence and a negligible turnaround time at the destination node.

11.3.4. Packet Reliability

Reliability must meet the following minimum criteria:

o  < 1% MAC layer errors on all messages; after no more than three retries,

o  < 0.1% network layer errors on all messages; after no more than three additional retries,

o  < 0.01% application layer errors on all messages.

Therefore, application layer messages will fail no more than once every 10,000 messages.
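As a non-normative worked example of the figures in Sections 11.3.3 and 11.3.4, the short sketch below converts the latency and error targets into per-hop and per-message terms. The even split of the 120 ms budget across hops is an assumption made for illustration; the document only bounds the end-to-end time.

# Small worked numbers for Sections 11.3.3 and 11.3.4 (illustrative only).

# 11.3.3: request + reply within 120 ms over at most 5 hops each way.
end_to_end_ms = 120
hops_each_way = 5
per_hop_budget_ms = end_to_end_ms / (2 * hops_each_way)
print(f"per-hop forwarding budget: {per_hop_budget_ms:.0f} ms")   # 12 ms

# 11.3.4: residual error targets expressed as messages-per-failure.
for layer, rate in [("MAC", 0.01), ("network", 0.001), ("application", 0.0001)]:
    print(f"{layer:11s} < {rate:.2%} -> at most 1 failure per {int(1/rate):,} messages")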
11.3.5. Merging Commissioned Islands

Subsystems are commissioned by various vendors at various times during building construction. These subnetworks must seamlessly merge into networks, and networks must seamlessly merge into internetworks, since the end user wants a holistic view of the system.

11.3.6. Adjustable System Table Sizes

Routing must support adjustable routing table sizes (number of entries) on a per-node basis to make the best use of the limited RAM in the devices.

11.3.7. Communication Distance

A source device may be up to 1000 feet from its destination. Communication may need to be established between these devices without needing to install intermediate 'communication only' devices such as repeaters.

11.3.8. Automatic Gain Control

For wireless implementations, the device radios should incorporate automatic transmit power regulation to maximize packet transfer and minimize network interference, regardless of network size or density.

11.3.9. IPv4 Compatibility

The routing protocol must support cost-effective intercommunication among IPv4 and IPv6 devices.

11.3.10. Proxying for Sleeping Devices

Routing must support in-bound packet caches for low-power (battery and energy-harvesting) devices when these devices are not accessible on the network. These devices must have a designated powered proxying device to which packets will be temporarily routed and cached until the constrained device accesses the network.

11.3.11. Device and Network Integrity

Commercial building devices must all be periodically scanned to assure that the device is viable and can communicate data and alarm information as needed. Network routers should temporarily retain previous packet flow information to minimize overall network overhead.

11.4. Additional Performance Requirements

11.4.1. Data Rate Performance

An effective data rate of 20 kbit/s is the lowest acceptable operational data rate on the network.

11.4.2. Firmware Upgrades

To support high-speed code downloads, routing MUST support transports that provide parallel downloads to targeted devices yet guarantee packet delivery. In cases where the spatial position of the devices requires multiple hops, the algorithm must recurse through the network until all targeted devices have been serviced. Devices receiving a download MAY cease normal operation, but upon completion of the download they must automatically resume normal operation.

11.4.3. Prioritized Routing

Network and application routing prioritization is required to assure that mission-critical applications (e.g. Fire Detection) cannot be deferred while less critical applications access the network.

11.4.4. Path Persistence

To eliminate high network traffic in power-fail or brown-out conditions, previously established routes SHOULD be remembered and invoked prior to establishing new routes for those devices re-entering the network.

11.5. Additional Network Security Requirements

11.5.1. Encryption Levels

Encryption SHOULD be optional on the LLN. Encryption SHOULD be fully configurable on-site. Encryption policy and updates SHOULD be transmittable over-the-air and in-the-clear.

11.5.2. Security Policy Flexibility

In most facilities, authentication and encryption will be turned off during installation. More complex encryption policies might be put in force at commissioning time. New encryption policies MUST be allowed to be presented to all devices in the LLN over the network, without needing to visit each device.

12. Appendix B: FMS Use-Cases

Appendix B contains FMS use-cases that describe the use of sensors and controllers for various applications within a commercial building and how they interplay with energy conservation and life-safety applications.

The Vooruit arts centre is a restored monument which dates from 1913. This complex monument consists of over 350 different rooms, including meeting rooms, large public halls and theaters serving as many as 2500 guests. A number of use cases regarding Vooruit are described in the following text. The situations and needs described in these use cases can also be found in all automated large buildings, such as airports and hospitals.

12.1. Locking and Unlocking the Building

A member of the cleaning staff arrives first in the morning and unlocks the building (or a part of it) from the control room. This means that several doors are unlocked, the alarms are switched off, the heating turns on, some lights switch on, etc. Similarly, the last person leaving the building has to lock the building. This will lock all the outer doors, turn the alarms on, switch off heating and lights, etc.

The "building locked" or "building unlocked" event needs to be delivered to a subset of all the sensors and actuators. It can be beneficial if those field devices form a group (e.g. "all-sensors-actuators-interested-in-lock/unlock-events"). Alternatively, the area and zone controllers could form a group where the arrival of such an event results in each area and zone controller initiating unicast or multicast within the LLN.

This use case is also described in the home automation routing requirements [I-D.ietf-roll-home-routing-reqs], although the requirement about preventing the "popcorn effect" can be relaxed a bit in building automation. It would be nice if lights, roll-down shutters and other actuators in the same room or area with transparent walls executed the command at around (not exactly at) the same time (a tolerance of 200 ms is allowed).
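One non-normative way to keep group actuations within such a tolerance is to distribute the command with an "execute at" time rather than acting on arrival, as the Python sketch below illustrates. The 250 ms lead time, the names, and the assumption of roughly synchronized node clocks are all illustrative choices, not requirements of this document.

# Illustrative sketch: scheduled group actuation to mask differing
# multicast delivery delays.  All values and names are assumptions.

import heapq

def schedule_group_command(now_ms, command, lead_ms=250):
    """Build the command as it would be multicast to the group."""
    return {"cmd": command, "execute_at_ms": now_ms + lead_ms}

class Actuator:
    def __init__(self, name):
        self.name = name
        self._pending = []                     # (execute_at_ms, cmd) heap

    def receive(self, message, arrival_ms):
        # Arrival time varies per node; the execute-at time does not.
        heapq.heappush(self._pending, (message["execute_at_ms"], message["cmd"]))

    def tick(self, now_ms):
        fired = []
        while self._pending and self._pending[0][0] <= now_ms:
            fired.append(heapq.heappop(self._pending)[1])
        return fired

msg = schedule_group_command(now_ms=1000, command="unlock")
lights = Actuator("lights")
shutters = Actuator("shutters")
lights.receive(msg, arrival_ms=1020)      # delivered quickly
shutters.receive(msg, arrival_ms=1180)    # delivered 160 ms later
# Both still execute at the same scheduled instant, inside the tolerance.
print(lights.tick(1250), shutters.tick(1250))   # ['unlock'] ['unlock']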
12.2. Building Energy Conservation A room that is not in use should not be heated, air conditioned or ventilated and the lighting should be turned off or dimmed. In a building with many rooms it can happen quite frequently that someone forgets to switch off the HVAC and lighting, thereby wasting valuable energy. To prevent this occurrence, the facility manager might program the building according to the day's schedule. This way lighting and HVAC is turned on prior to the use of a room, and turned off afterwards. Using such a system Vooruit has realized a saving of 35% on the gas and electricity bills. 12.3. Inventory and Remote Diagnosis of Safety Equipment Each month Vooruit is obliged to make an inventory of its safety equipment. This task takes two working days. Each fire extinguisher (100), fire blanket (10), fire-resistant door (120) and evacuation plan (80) must be checked for presence and proper operation. Also the battery and lamp of every safety lamp must be checked before each public event (safety laws). Automating this process using asset tracking and low-power wireless technologies would reduce a heavy burden on working hours. Martocci,(et al) Expires August 3, 2009 [Page 25] Internet-Draft draft-ietf-roll-building-routing-reqs February 2009 It is important that these messages are delivered very reliably and that the power consumption of the sensors/actuators attached to this safety equipment is kept at a very low level. 12.4. Life Cycle of Field Devices Some field devices (e.g. smoke detectors) are replaced periodically. The ease by which devices are added and deleted from the network is very important to support augmenting sensors/actuators during construction. A secure mechanism is needed to remove the old device and install the new device. New devices need to be authenticated before they can participate in the routing process of the LLN. After the authentication, zero-configuration of the routing protocol is necessary. 12.5. Surveillance Ingress and egress are real-time applications needing response times below 500msec, for example for cardkey authorization. It must be possible to configure doors individually to restrict use on a per person basis with respect to time-of-day and person entering. While much of the surveillance application involves sensing and actuation at the door and communication with the centralized security system, other aspects, including tamper, door ajar, and forced entry notification, are to be delivered to one or more fixed or mobile user devices within 5 seconds. 12.6. Emergency In case of an emergency it is very important that all the visitors be evacuated as quickly as possible. The fire and smoke detectors set off an alarm and alert the mobile personnel on their user device (e.g. PDA). All emergency exits are instantly unlocked and the emergency lighting guides the visitors to these exits. The necessary sprinklers are activated and the electricity grid monitored if it becomes necessary to shut down some parts of the building. Emergency services are notified instantly. A wireless system could bring in some extra safety features. Locating fire fighters and guiding them through the building could be a life-saving application. Martocci,(et al) Expires August 3, 2009 [Page 26] Internet-Draft draft-ietf-roll-building-routing-reqs February 2009 These life critical applications ought to take precedence over other network traffic. Commands entered during these emergencies have to be properly authenticated by device, user, and command request. 12.7. 
Public Address It should be possible to send audio and text messages to the visitors in the building. These messages can be very diverse, e.g. ASCII text boards displaying the name of the event in a room, audio announcements such as delays in the program, lost and found children, evacuation orders, etc. The control network is expected be able to readily sense the presence of an audience in an area and deliver applicable message content. Martocci,(et al) Expires August 3, 2009 [Page 27] Internet-Draft draft-ietf-roll-building-routing-reqs February 2009 Authors' Addresses Jerry Martocci Johnson Control 507 E. Michigan Street Milwaukee, Wisconsin, 53202 USA Phone: 414.524.4010 Email: jerald.p.martocci@jci.com Nicolas Riou Schneider Electric Technopole 38TEC T3 37 quai Paul Louis Merlin 38050 Grenoble Cedex 9 France Phone: +33 4 76 57 66 15 Email: nicolas.riou@fr.schneider-electric.com Pieter De Mil Ghent University - IBCN G. Crommenlaan 8 bus 201 Ghent 9050 Belgium Phone: +32-9331-4981 Fax: +32--9331--4899 Email: pieter.demil@intec.ugent.be Wouter Vermeylen Arts Centre Vooruit ??? Ghent 9000 Belgium Phone: ??? Fax: ??? Martocci,(et al) Expires August 3, 2009 [Page 28] Internet-Draft draft-ietf-roll-building-routing-reqs February 2009 Email: wouter@vooruit.be Martocci,(et al) Expires August 3, 2009 [Page 29]