Abstract

Heating, ventilation, and air conditioning systems in large buildings frequently feature a network topology wherein the outputs of each dynamic subsystem act as disturbances to other subsystems. The distributed optimization technique presented in this paper leverages this topology without requiring a centralized controller or widespread knowledge of the interaction dynamics between subsystems. Each subsystem's controller calculates an optimal steady state condition. The output corresponding to this condition is then communicated to downstream neighbors only. Similarly, each subsystem communicates to its upstream neighbors the predicted costs imposed by the neighbors' own calculated outputs. By judicious construction of the cost functions, all of the cost information is propagated through the network, allowing a Pareto optimal solution to be reached. The novelty of this approach is that communication between all plants is not necessary to achieve a global optimum. Since each optimizer does not require knowledge of its neighbors' dynamics, changes in one controller do not require changes to all controllers in the network. Proofs of convergence to Pareto optimality under certain conditions are presented, and convergence under the approach is demonstrated with a simulation example. The approach is also applied to a laboratory-based water chiller system; several experiments demonstrate the features of the approach and potential for energy savings.

Introduction

Commercial building energy systems are interconnected systems, often with centralized monitoring, although the sensors are widely distributed and individual components are independently controlled. Thus, the typical building is a “system-of-systems” with dynamics that evolve on multiple time scales with many varied performance objectives. An example of a building heating, ventilation, and air conditioning (HVAC) system is shown in part in Fig. 1. Multiple chillers utilize a refrigerant-based vapor compression cycle (VCC) to cool a secondary fluid (e.g., water), which is pumped to various devices or zones of the building, where heat exchangers in the air handling units cool the tertiary fluid, air. A network of fans and ducts then carries the cooled air to the desired locations. To reject waste heat, water is used to transport the heat produced by the condensers to cooling towers, where forced air convection removes heat to the outdoor environment. The chillers are the heart of this system; they are themselves interconnected systems consisting of multiple heat exchangers, expansion valves, and compressors. Their dynamics are driven by fluctuating pressures of the two-phase fluid in the heat exchangers and fluctuating refrigerant mass flow rates through the compressor and expansion valve. Standard practice employs separate, independent controllers for each of these varied components.

An additional consideration of building systems control is the constantly changing nature of the building dynamics. Components fail, and are replaced with different models. Weather creates daily and seasonal changes in operating conditions. Buildings age and windows leak. People come and go; windows and doors are opened and closed. Rooms are reconfigured. Occupants change thermostat settings, and building operators override system controls. Maintaining a single highly detailed and accurate model of all system behaviors, or updating a centralized controller every time a component is replaced or reconfigured is infeasible; for this reason, decentralized, independent control strategies are standard practice. Likewise, fully communicative distributed control approaches place unrealistic demands on the communication and computation infrastructure. Determining a modular, flexible control approach that still achieves the optimality of a centralized control solution is critical to achieving a practical control solution for modern building systems.

Literature Review.

Building operations account for approximately 40% of U.S. energy usage and carbon emissions [1,2], and 75% of peak electrical demand [3]. Although there certainly have been copious previous investigations into building energy systems, these efforts generally focus on HVAC component-level dynamics and control. In contrast, this work proposes a system-level optimal control approach. Moreover, the distributed nature of the approach retains the flexibility and modularity necessary given the constantly changing nature of building systems and operations. This approach is inspired by the model predictive control (MPC) algorithm, which has grown out of the needs of the process industries. Widely referenced overviews of this algorithm can be found in Refs. [4–7]. The dynamic systems in these industries (e.g., chemical plants) tend to be highly complex and nonlinear, making centralized control unrealistic due to modeling complexities and optimization difficulties. These characteristics are also prevalent in large building systems. The application of MPC to refrigeration cycles has been studied in Refs. [8–10], and MPC has been applied to air handling units and other components of large building systems [11–13].

The control systems for intelligent buildings and refrigeration systems are generally decentralized, rather than having one controller handling all functions. In decentralized MPC, each subsystem has an individual cost function that the controller seeks to minimize. The interactions with the other subsystems are ignored, although in any MPC optimization disturbances must be accounted for in order to achieve error-free reference tracking [14]. If the interactions between subsystems are significant, however, decentralized MPC can have the same interaction problems as any other decentralized control approach, since each controller has no awareness of the interconnection dynamics between plants.

An improvement on decentralized MPC is the approach commonly referred to as distributed MPC (DMPC), first proposed by Jia and Krogh in Ref. [15]. In this architecture, each interconnected subsystem has its own controller and individual cost function to optimize. However, the controllers communicate to each other their predicted control inputs and plant outputs (delayed by one time step). This gives each controller a better idea of future disturbances, allowing better prediction and therefore better performance. Venkat et al. observed that in DMPC, where each plant minimizes a different cost function rather than a shared function, the system can approach a condition where no individual subsystem can unilaterally lower its cost function any further; thus, the global optimum is not achieved [16]. This condition is commonly referred to as a Nash equilibrium [17]. The authors suggested an approach where all of the distributed controllers seek to optimize the same cost function, naming this feasible cooperation-based MPC [18,19]. The solutions to each optimization are communicated between controllers and iterated, and the solution is shown to approach the centralized controller's optimum. A more detailed treatment of the algorithm can be found in Ref. [20], and the algorithm was extended to nonlinear systems in Ref. [21]. Other cooperative algorithms with a focus on agent negotiation and game theory can be found in Refs. [22] and [23]. This approach was further developed to allow setpoint tracking (rather than just regulation) in Ref. [24].

Another important field of effort in the DMPC arena has been separation of the plants and development of the objective functions. For example, in Ref. [25], the authors used the Dantzig–Wolfe decomposition to break up the optimization problem; this decomposition was used to implement a coordinated decentralized MPC. A similar paper carried out a distributed optimization by breaking the quadratic programming (QP) problem into a set of local subproblems and applied the approach to traffic networks [26]. A method for partitioning large systems based upon graph theory and geared toward MPC was reported in Ref. [27]; the chosen application for this technique was a drinking water network. In Ref. [28], a modeling scheme was developed that treated coupling of the subsystem models as the tuning parameter. Survey articles have begun to collect and categorize the different approaches to DMPC; a comparison of different optimization methods is given in Ref. [29], and a recent survey of the state of the art in DMPC is presented in Ref. [30].

Many of these DMPC approaches require local knowledge of the interaction and plant dynamics of all other subsystems, and do not account for the possibility of different interconnection topologies simplifying the architecture. Regardless of how the plants are interconnected, each controller must communicate with every other one. Additionally, since each DMPC controller generally has a model of each plant in the system, if one of the plants is changed, every controller in the network must be changed as well. Because of these practical limitations, many of these DMPC approaches may not be useful for intelligent building systems, although simplified cases have been explored via simulation in Refs. [31–34].

This creates the potential for research into leveraging particular network topologies to build more effective, specialized control architectures, since in a typical HVAC application there are hundreds of controllers spread out over a very large physical area. Progress is currently being made in this area. For example, to address the problem of serially connected systems, a DMPC scheme specially constructed for this topology was presented in Ref. [35]. To reduce the communication load between controllers, a noncooperative algorithm that only requires neighbor-to-neighbor communication was presented in Ref. [36].

A recent paper proposed an approach that used the sensitivities of the local cost functions to the actions of their neighbors in computing local controller actions [37]. This required that each subsystem have knowledge of its neighbors' dynamics, however, since the sensitivity was calculated with respect to the neighbors' inputs. In contrast, the architecture proposed in this paper only requires the disturbance signals and a simple cost term to be communicated between controllers, so the local controllers need only have knowledge of the local plant's behavior.

The distributed optimization approach presented in this paper, called neighbor-communication optimization (NC-OPT), was suggested by the authors in Ref. [38]; this paper serves to further refine that approach, and present guarantees of convergence and optimality. The NC-OPT algorithm requires each subsystem to have knowledge of its own plant only, and communicates only with its immediate neighbors. The local cost functions are constructed so that a centralized optimum can be reached iteratively.

The rest of the paper is organized as follows. Mathematical preliminaries and definitions of the various terms used throughout the paper are presented. Then, the cost functions and model structures for the local controllers are given, and the algorithm is detailed. Proofs for stability of the optimization and convergence to the centralized optimum are presented. The architecture is then applied to an experimental refrigeration system, and results of the design are given with discussion of potential energy savings.

Preliminaries

Topology.

This section will present the terms and definitions used throughout the paper to describe the NC-OPT approach. Consider a group of p dynamical subsystems, each denoted by σi and described by the following discrete-time difference equations:
(1)

In this equation, the state variables are xi, the control inputs are ui, and the regulated or tracked outputs are yi. If there are outputs that act as disturbances to other subsystems in the group, these are designated zi and will be referred to as network outputs. If there are disturbances coming in from other subsystems in the group, these disturbance signals are designated vi and referred to as network inputs. Disturbances that are exogenous with respect to the network as a whole are designated di. The right-hand superscripts t and t + 1 represent the current and next sampling instant, respectively. Refer to Fig. 2.
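The body of Eq. (1) is not reproduced above; a sketch consistent with the signal definitions just given (the particular functional form shown is an assumption, not the authors' equation) would be

x_i^{t+1} = f_i(x_i^t, u_i^t, v_i^t, d_i^t), \qquad y_i^t = g_i(x_i^t, u_i^t, v_i^t, d_i^t), \qquad z_i^t = h_i(x_i^t, u_i^t, v_i^t, d_i^t)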

The reader should note that although yi and zi are treated as different signals for the purposes of constructing the NC-OPT algorithm, they need not be different physical quantities. This will be illustrated with an example.

Definition 1. For the group of subsystems described above, stack the network input and output vectors to form system-wide vectors, thus
This group of subsystems is said to be an “NC-OPT network” if there exists an nv × nz interconnection matrix Φ such that
(2)
Remark. The construction of the interconnection matrix is chosen such that for every element of each disturbance output zi in an “NC-OPT network”, there must be exactly one corresponding element in a disturbance input vi. Thus, nv = nz, and the interconnection matrix is orthonormal (its columns form an orthonormal basis of Rnv). To aid in exploring the topology of the network in question, the interconnection matrix can be expressed as a block matrix, resulting in the following breakdown:
(3)
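As a simple illustration of this structure (a hypothetical two-subsystem case, not one of the paper's examples), suppose σ1 and σ2 each send their single network output to the other. Then

\bar{v} = \Phi \bar{z}, \qquad \bar{v} = \begin{bmatrix} \bar{v}_1 \\ \bar{v}_2 \end{bmatrix}, \quad \bar{z} = \begin{bmatrix} \bar{z}_1 \\ \bar{z}_2 \end{bmatrix}, \quad \Phi = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}

which is a permutation (hence orthonormal) matrix: every network output appears exactly once as a network input.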
Definition 2. Two subsystems σi and σj in a network are “neighbors” if one or both of the following are true:

Remark. Definition 2 states that two plants are neighbors if a network output of one plant acts as a network input to the other plant. Note that the terms “upstream” and “downstream” neighbors refer to how signal information flows between physical systems (i.e., the logical topology of the network), and do not necessarily refer to physical location (i.e., the physical topology of the network). Thus, two mutually interacting plants will simultaneously be both upstream and downstream neighbors of each other, and the topology is generalizable to all types of physical interconnections between systems. This is illustrated in the following example.

Example. Consider the set of neighboring rooms, each with an independent temperature controller, presented in Fig. 3. Each room is a neighbor to each immediately adjacent room, since the temperature difference between rooms affects the heat transfer between them. Thus, a temperature difference between two neighbors will act as a disturbance on both rooms. Additionally, the outside temperature acts as an exogenous disturbance on each of the rooms.

The outputs tracked by the local controllers are the respective room temperatures; thus, the tracking output vector y for the network is
(4)
Not only are the temperatures the tracking outputs, they also are the network outputs for each subsystem, since heat transfer between rooms will affect the control effort required to track the temperature setpoints. Since some of the temperatures affect more than one room, some of the zi terms are given as vectors, to ensure the orthonormal representation of the interconnection matrix, thus
(5)
The relationship between the network outputs and network inputs—the network topology—is then described through the interconnection matrix
(6)

This example illustrates the concepts used in the NC-OPT architecture: the idea of neighbors, the distribution of network outputs and inputs, and the handling of complex topologies. Additionally, it shows that the same physical signal can be used as both a tracking and network output, and that a signal can serve as a network output to multiple neighbors. Section 2.2 will present the centralized cost function that defines the ultimate goal of the architecture, as well as the system models for which this function can be used.

Cost Functions.

The thrust of this work is a technique for achieving a system-level, steady state optimal operating condition without a centralized controller. Pursuant to this objective, a system-level (centralized) cost function that specifies the desired condition is now defined. This cost function is a weighted quadratic function of the steady state control inputs and the resulting steady state tracking errors (deviations from user-desired setpoints). Hereinafter an overbar will be used to indicate the steady state value of a discrete-time signal, e.g., y¯i is the steady state vector of the regulated outputs of subsystem σi.

The steady state tracking error for subsystem σi is defined as
(7)
In which ri is the setpoint reference, e¯i is the error, and y¯i is the tracked output as described earlier. The reference signals are assumed to be constant over the next sampling period of the controller, and so the overbar is omitted. These signals, as well as the control input vectors, can be stacked to create system-wide vectors, as before
(8)
These vectors are the steady state tracking outputs, reference signals, steady state error signals, and steady state control inputs, respectively. A steady state centralized cost function can be defined with them
(9)
The weight matrices used are block diagonal matrices that weight the error and control inputs; they consist of per-subsystem weighting matrices, thus
(10)
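The bodies of Eqs. (9) and (10) are not shown above; a plausible reconstruction consistent with the weighted quadratic description, using the per-subsystem weights Qi and Si that appear later in Eq. (34), is

J_C = \bar{e}^T Q \bar{e} + \bar{u}^T S \bar{u}, \qquad Q = \mathrm{blkdiag}(Q_1, \ldots, Q_p), \quad S = \mathrm{blkdiag}(S_1, \ldots, S_p)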

Models.

The work presented in this paper will assume that the discrete-time systems are linear time invariant, thus Eq. (1) becomes
(11)
The NC-OPT architecture calculates the predicted steady state optimum for each of the plants. The predicted steady state outputs for subsystem σi for a set of inputs u¯i, v¯i, and d¯i are
(12)
(13)
To complete the steady state description of an NC-OPT network, define the following stacked vectors: exogenous disturbances, steady state network inputs, and steady state network outputs. The exogenous disturbances are assumed to be constant over the next sampling period
(14)
Similarly, define the following block diagonal matrices:
(15)
Thus, we can define the entire system's steady state condition with the following relationships:
(16)
Using the interconnection matrix detailed earlier, these relationships can be expressed as linear functions of control inputs u¯ and disturbances d¯
(17)
Substitution into Eq. (16) yields
(18)
(19)
Substitution of these relationships into the centralized cost function yields
(20)

This is the centralized cost function that can be minimized in terms of u¯ in order to obtain the system-wide optimal condition, which hereinafter will be referred to as the Pareto optimum. The objective of the NC-OPT architecture is to use a distributed set of controllers to locally find the optimal u¯i values. The controllers will have no knowledge of the other plants' dynamics, and will only communicate with their immediate neighbors. Section 3 of the paper will describe the local cost functions and the algorithm that will use them to meet these requirements.
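Because the bodies of Eqs. (11)–(20) are not reproduced, the following sketch indicates the likely chain of substitutions; the gain symbols M_{y,i} and N_z are taken from Eqs. (34) and (36), while the remaining gain names (N_y, M_z, P_y, P_z) are hypothetical placeholders introduced here only for illustration:

\bar{y} = M_y \bar{u} + N_y \bar{v} + P_y \bar{d}, \qquad \bar{z} = M_z \bar{u} + N_z \bar{v} + P_z \bar{d}, \qquad \bar{v} = \Phi \bar{z}
\;\Rightarrow\; \bar{v} = (I - \Phi N_z)^{-1} \Phi (M_z \bar{u} + P_z \bar{d}), \qquad J_C(\bar{u}) = (r - \bar{y})^T Q (r - \bar{y}) + \bar{u}^T S \bar{u}

so that J_C becomes a quadratic function of u¯ alone for fixed r and d¯.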

NC-OPT

In general, local controllers will not reach an equilibrium position that is globally optimal. In order for distributed controllers to achieve the global optimum, each controller must have accurate information about the actions of the other controllers to account for cross-coupling effects.

The central idea in the NC-OPT approach is that every plant communicates only with its immediate neighbors, and each controller has no knowledge of any plant model other than its own. Each controller must communicate with its neighbors as follows:

  1. Sending predicted network outputs to downstream neighbors.

  2. Similarly, receiving predicted network inputs from upstream neighbors.

  3. Sending to upstream neighbors the changes in its own cost function created by the network inputs coming from those neighbors.

  4. Receiving the same cost information from downstream neighbors.

Since, after sufficient iteration, the local plants take into account the effects of their own actions on the network as a whole, the optimizers are effectively solving the centralized cost function, and the Pareto optimum will be reached. In practice, the controllers will often be paired with local component dynamic controllers; in this case, the NC-OPT control inputs u¯i will be setpoints for the lower level controllers to meet.

This section will present the local cost functions used in the NC-OPT architecture, and discuss the nature of the communication between plants, including a detailed presentation of the algorithm required.

Decentralized Cost Function.

The local cost function that is minimized for plant σi is
(21)

The first two terms of Eq. (21) comprise the standard decentralized cost function, with knowledge of the network inputs coming from neighbors (refer to Eq. (11)). These terms do not, however, apply a cost for the effects that the network outputs z¯i have on the subsystem's downstream neighbors, so a third term ψiTz¯i is added. The vector ψi consists of the sensitivities of the downstream neighbors' cost functions to the predicted network outputs z¯i, and is communicated to the controller by the downstream subsystems. Likewise, the subsystem must calculate the effect that the network inputs (v¯i) have on its own cost, and pass this cost to the neighbors imposing these inputs. These effects are calculated as first-order sensitivities to the incoming inputs, and are designated as a vector γi, a linear function of v¯i, u¯i, and ψi.
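A plausible reconstruction of Eq. (21), consistent with this description and with the local weights Qi and Si used in Eq. (34), is

J_i = \bar{e}_i^T Q_i \bar{e}_i + \bar{u}_i^T S_i \bar{u}_i + \psi_i^T \bar{z}_i

where the first two terms penalize local tracking error and control effort, and the last term charges the subsystem for the disturbances it exports to its downstream neighbors.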

Begin by expanding the cost function
(22)
Now calculate the first-order sensitivity of the cost function to changes in the network inputs
(23)
The elements in this vector will be passed along to the neighbors creating the relevant disturbances, informing these neighbors of the costs imposed by their actions. The distribution of this information throughout the network is described using the interconnection matrix Φ
(24)
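Although the body of Eq. (24) is not shown, a form consistent with v¯ = Φz¯ and the orthonormality of Φ (the exact notation is an assumption) is

\psi = \Phi^T \gamma

i.e., the same routing matrix that carries predicted outputs downstream is transposed to carry the cost sensitivities back upstream.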

Thus, the cost functions that each subsystem seeks to minimize have been defined, as well as the information that will be passed between neighbors. Note that each subsystem is engaged in two-way communication with each of its neighbors. Expected outputs are communicated downstream (and received from upstream), and the resulting cost penalties are communicated upstream (and received from downstream). The algorithm governing this communication is detailed next.

Algorithm.

The global optimization is distributed among the individual subsystems; in order to achieve a global optimum, an iterative approach is required. The indexing variable c is used for the iteration count, and should not be confused with the plant's time value, t. At predetermined time intervals (controller sampling rate), the algorithm proceeds thus for a prescribed number of iterations, cmax:

Step 0. Initialize variables for controller iteration (not plant time) step c = 0. Each controller assigns initial values to v¯i, u¯i, and ψi; these are generally the last held values from the previous time step, or sensor measurements as they are communicated from other systems. If this is the very first calculation (i.e., the plant's t = 0), these variables are set to zero.

Step 1. (Iterations start here). Each controller communicates its stored network output vector z¯i to its affected neighbors; this is stored as the network input v¯i by the affected plants. Similarly, each controller passes the cost sensitivities γi associated with its network inputs v¯i to the upstream plants; these are stored as ψi by the upstream neighbors. At the system level, this creates the following update for the information:
(25)
(26)

Step 2. Each controller calculates the optimal solution to its cost function uiQP=argmin(Ji) per Eq. (21).

Step 3. Each controller updates the optimal control input as a convex combination of uiQP and the previous value of u¯i. For tuning parameter w ∈ [0,1)
(27)
Step 4. Each controller calculates the predicted network outputs z¯ic+1 using u¯ic+1. The calculation also uses v¯ic, which is the network input prediction from the previous iteration
(28)
Step 5. Each controller calculates the sensitivity of the local cost functions to the predicted network inputs; this information will be communicated to the upstream neighbors. From Eq. (23)
(29)

Step 6. Compare iteration count to the maximum permitted value cmax. Iterate (repeat steps 1–6) or stop, as appropriate.

Step 7. Each controller applies the final values of u¯i to the plants. In the examples given in this paper, the values of u¯i are setpoints passed to lower level dynamic controllers. Repeat process at the NC-OPT controller sampling rate.

In summary, each local controller calculates its optimal control inputs and the corresponding outputs. It communicates the predicted outputs to its downstream neighbors' controllers for use in their calculations. It also calculates the additional cost imposed by its upstream neighbors, and communicates that information to those upstream controllers. After the process is repeated for a set number of iterations, the resulting control input values are applied to the systems until the next controller sampling time. The number of iterations used and the sampling rate for the controllers are both parameters chosen as part of the control system design.
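To make these steps concrete, the following is a minimal Python sketch of the iteration loop for one NC-OPT sampling instant in the unconstrained, linear case. It is written with stacked (system-wide) vectors for brevity, although in practice each controller would compute only its own block; the gain matrices, the closed-form local QP solution, and the sensitivity expression follow the hedged reconstructions given earlier and are illustrative assumptions rather than the authors' implementation. The centralized solution is computed at the end only to illustrate the convergence result discussed in Sec. 4.

```python
import numpy as np

# --- Hypothetical two-subsystem example; all signals scalar ------------------
# Stacked steady-state gains (block diagonal). In practice each controller
# stores only its own block and never sees its neighbors' models.
My = np.diag([1.0, 0.8])   # control input  -> tracked output
Ny = np.diag([0.3, 0.2])   # network input  -> tracked output
Mz = np.diag([0.5, 0.4])   # control input  -> network output
Nz = np.diag([0.1, 0.1])   # network input  -> network output ("low gain")
Q  = np.diag([1.0, 1.0])   # tracking-error weights
S  = np.diag([0.1, 0.1])   # control-effort weights
Phi = np.array([[0.0, 1.0],
                [1.0, 0.0]])          # each plant's output disturbs the other
r = np.array([1.0, -0.5])             # user setpoints
w, c_max = 0.7, 200                   # convex-combination weight, iterations
# (exogenous disturbances d are omitted here for brevity)

u = np.zeros(2); z = np.zeros(2); gamma = np.zeros(2)   # Step 0
Y = np.linalg.inv(My.T @ Q @ My + S)                    # Y_i of Eq. (34), stacked
for c in range(c_max):
    v   = Phi @ z            # Step 1: predicted outputs passed downstream
    psi = Phi.T @ gamma      # Step 1: cost sensitivities passed upstream
    # Step 2: local unconstrained QP solutions (closed form, one per subsystem)
    u_qp = Y @ (My.T @ Q @ (r - Ny @ v) - 0.5 * Mz.T @ psi)
    u = w * u + (1.0 - w) * u_qp                         # Step 3
    z = Mz @ u + Nz @ v                                  # Step 4
    e = r - (My @ u + Ny @ v)                            # local tracking errors
    gamma = -2.0 * Ny.T @ Q @ e + Nz.T @ psi             # Step 5: dJ_i/dv_i

# --- Centralized (Pareto) optimum for comparison ------------------------------
T = np.linalg.inv(np.eye(2) - Phi @ Nz) @ Phi  # maps Mz*u to v at steady state
A = My + Ny @ T @ Mz                           # effective gain from u to y
u_opt = np.linalg.solve(A.T @ Q @ A + S, A.T @ Q @ r)
print("NC-OPT fixed point :", u)
print("Centralized optimum:", u_opt)
```

Because the hypothetical coupling gains are small, the fixed point of this loop coincides with the centralized optimum, as the analysis of Sec. 4 would predict.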

If the values calculated during this process converge to final values, these values will minimize the centralized cost function given above. Section 4 of the paper discusses the requirements for convergence and shows that the final results are those which minimize the system-level steady state operating cost.

Stability and Convergence of the NC-OPT Algorithm

In the algorithm detailed in Sec. 3.2, three sets of vectors are calculated during the process of finding the local and system-wide optima—control inputs, network outputs, and cost sensitivities—for a given set of user-defined setpoints and exogenous disturbances. As the local controllers iterate to a solution, these vectors change and evolve in much the same way as the states of a dynamic system. This section of the paper presents a model of the evolution of the local optima under the NC-OPT algorithm as a linear discrete-time dynamic system. Requirements for stability of this “information dynamics” system, and thus the NC-OPT algorithm itself, are presented. Proof is also given that if the system converges, it will converge to the centralized optimum.

Theorem 1. In the absence of constraints, the NC-OPT algorithm will converge to a stable solution if and only if the eigenvalues of the matrix product ΦNz are inside the unit circle.

Proof. Begin with the update laws for the NC-OPT algorithm, as presented earlier:

The convex combination of the current QP solutions with the last value can be expressed as a discrete-time update law
(30)
This is used to calculate a corresponding updated prediction of the network outputs. There will be a time-step delay between this calculation and the recognition of the change by the downstream plants, which allows the following relationship:
(31)
By a similar argument, a discrete-time update law for the sensitivity of the cost function to the network inputs can be expressed
(32)
These update laws can be collected into a single system representation. This system has the (bounded) results of the controllers uQP,NC as its input, and disturbance vectors consisting of the user setpoints r and the exogenous disturbances d¯. From Eqs. (25)–(29)
(33)
Since the state matrix of the above system is lower block triangular, its eigenvalues are the same as the eigenvalues of the matrices along its diagonal. In order for the above system to be bounded-input bounded-output stable, therefore, the eigenvalues of the matrix ΦNz must lie within the unit circle of the complex plane. Inside the constraint space, the solution to the QP problem u¯iQP can be found by solving dJi/du¯i=0 for u¯i
(34)
where Yi=(My,iTQiMy,i+Si)-1. Since all matrices in Eq. (20) are block diagonal, the solutions can be stacked to obtain the following matrix equation:
(35)
Substitution of Eq. (35) into Eq. (33) changes the state matrix of the information transfer dynamics within the constraint space into
(36)

Once again, the eigenvalues of the state matrix dictate the stability of the above system's equilibrium point. Recall that the scalar term w is used as a tuning parameter for the algorithm; namely, as w approaches 1, the convergence is slower. Examination of the state matrix shows that as w gets closer to 1, the matrix becomes closer to the state matrix in Eq. (33). Thus, as w increases, the eigenvalues can be driven inside the unit circle, ensuring that convergence is possible as long as |λ(ΦNz)|<1.▪

Remark. This can be physically interpreted as a low-gain requirement from upstream disturbances to outputs going downstream (z¯). For cases where this stability condition is not met, the network topology can be redesigned as necessary. For example, two subsystems with a very high gain from one to the other can be combined to create a single subsystem for the purposes of the NC-OPT architecture only, without redesigning lower level controllers.
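The condition in Theorem 1 is straightforward to check numerically once Φ and the stacked network gain matrix Nz are available; a brief sketch with hypothetical matrices is shown below.

```python
import numpy as np

# Hypothetical interconnection matrix and stacked network-input-to-output gains
Phi = np.array([[0.0, 1.0],
                [1.0, 0.0]])
Nz  = np.diag([0.1, 0.1])

rho = max(abs(np.linalg.eigvals(Phi @ Nz)))   # spectral radius of Phi*Nz
if rho < 1.0:
    print(f"|lambda(Phi*Nz)| = {rho:.3f} < 1: NC-OPT can be made to converge")
else:
    print(f"|lambda(Phi*Nz)| = {rho:.3f} >= 1: consider merging strongly coupled subsystems")
```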

Convergence to Centralized Optimum.

Theorem 2. If the unconstrained NC-OPT algorithm converges to a stable solution, this solution will minimize the centralized cost function, as defined in Eq. (20).

Proof. Recall from Eq. (20) that the centralized cost function expands to
(37)
Calculating dJC/du¯=0 gives a relationship between the optimal u¯ (designated u¯OPT) and the setpoint r and disturbance d¯ when the optimum is inside the constraint space
(38)
Now examine the cost functions minimized by the NC-OPT controllers. Recall that calculating dJi/du¯i=0 gives the following, where u¯iQP=argmin(Ji)
(39)
Stacking this equation for all subsystems results in
(40)
Note that the structure of this relationship is similar to that found in Eq. (38). The terms are dynamic vectors or states, but upon convergence of the algorithm, these states collapse to the following steady state relationships:
(41)
Substituting Eq. (41) into Eq. (40) above yields
(42)
Rearranging terms
(43)

Equations (38) and (43) are identical; thus, the relationship between the optimal u¯ and the setpoint r and disturbance d¯ for the NC-OPT case is the same as that for the centralized case. ▪

Soft Uncoupled Input Constraints.

Theorem 3. Theorems 1 and 2 also hold true in the presence of a soft, uncoupled constraint on the input ui, provided the required conditions for those theorems are met.

Proof. If a soft, uncoupled constraint is applied to each controller, then the local cost functions will have an extra term which is a nonlinear, differentiable function of u¯i
(44)

Define a stacked vector of these scalar functions thus: f(u¯) = [f1(u¯1) ⋯ fp(u¯p)]T

Applying the same treatment as earlier results in a slightly different version of Eq. (38)
(45)

Since the constraints on u are uncoupled, ∂fi/∂u¯j = 0 for j ≠ i, and the derivation for the NC-OPT case results in the same cost as the centralized case. ▪

Design, Implementation, and Experiments

A demonstration of the control approach is now given by application to a small-scale experimental test bed system. This multiple evaporator vapor compression system features a variable-speed compressor and three shell-and-tube style evaporators for chilling water. The compressor is capable of providing approximately 3.75 kW of cooling, and each of the evaporators is rated for approximately 1.5 kW individually. Each of these evaporators services a separate tank of water that is subjected to a heat load by one or more immersion-style electric heaters. This arrangement simulates a “split system” variable refrigerant flow air conditioner, where each evaporator serves a room or zone; thus, each of these water tanks will be referred to as a “zone.” A similar application would be a supermarket refrigerator with multiple evaporators. A schematic of the system is given in Fig. 4. The measurements, actuators, and disturbances are indicated on the figure and described in Table 1.

Topology Design for Laboratory Chiller System.

The first step in applying the NC-OPT algorithm is dividing the networked system into individual subsystems. In this application, the subsystems are demarcated by the individual actuators, creating a network of single-input systems. Each of these systems has a single actuator that is dynamically controlled with a proportional-integral (PI) controller. Thus, the u¯i terms that the local controllers are calculating are actually setpoints for these lower level controllers. A block diagram of the overall network topology is shown in Fig. 5. Each of the blocks represents a subsystem that includes a physical plant with a local PI controller, along with an NC-OPT controller. Figures 6–8 will show the content of these blocks in the diagram. The arrows demonstrate how the physical conditions of the plants disturb each other.

In this particular experimental system, each of the individual subsystems is modeled as a Type 0 system. When each subsystem is combined with the PI controller that is typical in industrial applications, zero steady state error can be assumed from the controller reference setpoint to the regulated output. Therefore, many of the entries in the Nz matrix become zero. This simplifies the interconnection model greatly, since the controllers will reject disturbances from the upstream plants, assuming the actuators do not saturate. However, we emphasize that the proposed algorithm is capable of implementation on systems with more complicated topologies, with any arbitrary stabilizing local controllers.

In this application of NC-OPT, the compressor speed is used to control the suction pressure, the electronic expansion valves (EEVs) are used to control the evaporator superheats, and the water pumps are used to control the water temperature drop, which is defined here as the temperature drop of the water as it passes through the evaporator. The specific models and local structure are detailed next.

Compressor and Condenser.

The compressor takes low pressure, vaporized refrigerant and increases its pressure and temperature; this high energy vapor is passed through the condenser, which rejects some of the refrigerant's thermal energy to an outside heat sink. The result of this isobaric process is a high pressure, saturated liquid that is fed to the expansion valves.

The first subsystem (plant 1) is the combination of these two components, shown in Fig. 6. The pressure at the compressor inlet, referred to as the suction pressure (Ps), is regulated to a setpoint by a PI controller using compressor speed as the actuating signal. This pressure and the compressor power consumption (Ẇ) are the tracking outputs of the subsystem. Although power consumption is not directly controlled with a lower level controller, setting the desired power consumption to zero induces the optimizing controller to seek minimal power consumption. The suction pressure also acts as a network output, since it affects the behavior of the other subsystems. The network inputs for this subsystem are the cooling capacities (q) of each of the zones. This subsystem does not have any exogenous uncontrolled disturbances in this formulation. A table summarizing the signals used and referencing them to the NC-OPT nomenclature is included in the Appendix (Tables 2 and 3).

Two assumptions are made in constructing this subsystem that simplify the network topology. First, as discussed earlier, any disturbances to the suction pressure will create zero steady state tracking error due to the PI controller, assuming the compressor speed does not saturate. Additionally, the effect of superheat on power is ignored in this model. While superheat is widely recognized as an important factor in operating efficiency, in the region of maximum efficiency the sensitivity of power consumption to superheat is small [39]. Since these are linearized models, the system origin is set within this region, as is the setpoint range allowed by the NC-OPT controller. The steady state model therefore simplifies to
(46)

Evaporators.

The evaporators in a VCC system function by absorbing thermal energy from a zone into the refrigerant; this heat transfer process is referred to as cooling, and is measured in kilowatts (kW). The high pressure liquid refrigerant entering the evaporator is metered by an expansion valve, which expands the refrigerant to a low pressure, low temperature two-phase fluid. This fluid passes through the evaporator, absorbing thermal energy from the zone via evaporation, and exits the evaporator as a low temperature, low pressure vapor. Assuming complete evaporation, the outlet temperature of the refrigerant will be higher than that at the inlet; the temperature difference is referred to as superheat. A PI loop with an EEV is used in this system to track superheat to a setpoint.

A block diagram typical for subsystems 2–4 is shown in Fig. 7. Evaporator superheat is the tracking output of the subsystem as well as the network output. An EEV with a PI controller performs the tracking task according to a setpoint from the NC-OPT controller, which is the subsystem input. The steady state model is

(47)

Zones.

Subsystems 5–7 are the zones themselves; these plants consist of a tank of water (analogous to a room of air) with a water pump that pumps water through the secondary side of the evaporator heat exchangers. A PI controller uses the pump speed (expressed as a percentage of full speed) to regulate the water temperature drop as it passes through the evaporator. To simulate the heat load experienced by an actual system, immersion heaters are placed in the water tank. Figure 8 gives the block diagram for the subsystems.

The NC-OPT signals are constructed as follows. The water temperature drop across the evaporator (ΔTw) and the zone temperature (Tzone) are the tracking outputs. Note that the zone temperature would be the interface that the occupants would alter to meet their own comfort needs in an actual system, and the temperature drop requirement is generally governed by health and safety concerns (e.g., proper dehumidification in an air-based system). The network inputs are the suction pressure and the superheat in the corresponding evaporator.

Heat load is the disturbance for these plants, and the evaporator cooling rate is the network output. Since these are steady state models, the predicted temperature is unchanging; therefore, disturbance heat load (energy going into the zone) must equal the evaporator cooling rate (energy going out of the zone). This means that the gain of the heat load disturbance to cooling is 1. For the purposes of this experimental demonstration, a Kalman filter compares the actual temperature change over a sampling period to the predicted change for the current cooling rate. The filter then uses this comparison to estimate the size of the heat load disturbance, and feeds that information into the NC-OPT controller. The steady state models used by the controllers are
(48)
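The heat load estimator described above can be implemented with a simple one-state-augmented Kalman filter; the following Python sketch shows the idea for a single zone, with all model values (thermal capacitance, noise covariances, initial conditions) chosen purely for illustration rather than taken from the experimental system.

```python
import numpy as np

dt, C = 300.0, 2000.0          # sample time [s], zone thermal capacitance [kJ/K] (illustrative)
# State x = [T_zone, q_load]; the heat load is modeled as a random walk
A = np.array([[1.0, dt / C],
              [0.0, 1.0]])
B = np.array([[-dt / C],
              [0.0]])          # input: evaporator cooling rate q_cool [kW]
H = np.array([[1.0, 0.0]])     # only the zone temperature is measured
Qn = np.diag([1e-4, 1e-3])     # process noise (temperature, heat-load drift)
Rn = np.array([[0.25]])        # measurement noise (~0.5 degC standard deviation)

x = np.array([22.0, 0.5])      # initial estimate: 22 degC, 0.5 kW load
P = np.eye(2)

def kf_step(x, P, q_cool, T_meas):
    """One predict/update cycle; returns the updated state and covariance."""
    # Predict: how much should the temperature have changed for this cooling rate?
    x = A @ x + (B @ np.array([q_cool])).ravel()
    P = A @ P @ A.T + Qn
    # Update: the mismatch between measured and predicted temperature corrects q_load
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Rn)
    x = x + K @ (np.array([T_meas]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = kf_step(x, P, q_cool=0.6, T_meas=22.3)
print(f"estimated heat load fed to the NC-OPT controller: {x[1]:.2f} kW")
```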

Illustration of Convergence—Simulation.

To illustrate the controller convergence, the models and network topology developed for the experimental system in Sec. 5.1 are used to construct an NC-OPT controller network. This is used to display a simulated convergence during one controller time step. In this simulation, a nonzero setpoint is fed to each NC-OPT controller, shown in Fig. 9. The controllers iterate from the origin of the linearized system model to the new optimum. The origin of the system, expressed as the subsystem operating conditions, can be found in the Appendix, along with the controller gains used.

The theory sections of the paper discussed a low-gain requirement for the interconnection, expressed mathematically in terms of the eigenvalues of the matrix product ΦNz. For the suite of models developed in this application, Nz is a zero matrix, since there is no gain from the network inputs to the network outputs for any of the local models, as shown in the model Eqs. (46)–(48). Thus, the NC-OPT network designed for this application will always converge given sufficient iteration, since the eigenvalues will always be zero and thus within the unit circle of the complex plane.

The abscissa on each graph is the iteration count; the various ordinates are the setpoints fed to the local controllers. While over 100 iterations are needed to fully converge to the centralized optimum, the deviations from the optimal setpoint are very small after 25–50 iterations—typical uncertainty on a temperature measurement, for example, is 0.5 °C. Figure 10 compares the value of the centralized cost function over the iterations to the Pareto minimum; again, the NC-OPT cost gets quite close within 25 iterations. The centralized cost for the decentralized optima is also indicated in the figure for comparison.

These observations suggest that full convergence at each controller sampling instance is probably not necessary for this application. This highlights that if the dynamics of the physical plants are slow relative to the sample time of the controllers, then a low number of iterations (in this case, less than 25) will not necessarily impact the performance of the setpoint calculation. This concept is explored in the first data set that follows.

Setpoint Change—Experimental Comparisons of Iteration Number.

The first two experimental tests show the basic function of the NC-OPT architecture. The same suite of controllers was used for both tests, with a controller time step of 300 s. In the first case the controllers iterated five times every time step, and in the second case they iterated 250 times per time step. For both tests, the user-desired zone 1 temperature setpoint was decreased from 22 °C to 20 °C.

Figure 11 compares the setpoints for temperature drop (a) and superheat (b) calculated by the controllers. In the case with 250 iterations, the centralized optimum is achieved in the first pass, while with five iterations per cycle the steady state setpoints are not reached for a long time. The two cases do converge to the same values eventually, making allowances for sensor and disturbance uncertainty. Since the dynamics of the zone itself are driven by the superheat and temperature drop values but are considerably slower than those of the vapor compression system, the higher number of iterations may not be necessary, as is shown in Fig. 12. This figure shows the dynamic response of zone 1 temperature for the two cases. Despite the slower setpoint convergence, there is very little difference between the two dynamic responses of the zone temperature. The iteration number thus becomes a design parameter for setting up the control architecture; even if a large number of iterations is required for convergence to an optimum value, slow plant dynamics may allow for a smaller number of iterations per time step.

Disturbance Rejection.

The next experimental run demonstrates the disturbance rejection aspect of the NC-OPT architecture, as well as its ability to trade off setpoint tracking criteria. For the test, the heat load disturbance to zone 3 was increased from approximately 0.55 to 0.75 kW; the output of the Kalman filter is shown in Fig. 13. The NC-OPT controllers respond by lowering the temperature drop setpoint as seen in Fig. 14(a), which results in increased water flow from the pump. A slight decrease in the superheat setpoint also results, as shown in Fig. 14(b). These two changes result in more refrigerant flow, and hence more evaporator cooling capacity.

The increased disturbance load means that the system must provide more cooling, and thus consume more power. Since the compressor power consumption is one of the user-defined setpoints, and since a reduction in power consumption implies the zone temperatures will settle at a higher value, there will be a trade-off between tracking the zone temperature setpoints and the power consumption of the compressor. Figure 15 shows how the compressor speed increases to deliver the necessary cooling, as well as the power consumption over the process. Figure 16 shows how the zone temperature increases over the process; the accepted steady state error in the presence of the larger disturbance is larger than in the original case. The model-predicted steady state temperature that would result if the setpoints had remained unchanged under the new disturbance is also shown as a reference. Thus, the NC-OPT architecture can be used to balance the comfort demands of the user with the energy and cost reduction desired by building or system operators. This idea is shown further in the next experimental data set.

Energy Savings.

While improved component design has a large effect on system energy consumption, better control of existing control architectures can provide cost savings for a reduced capital investment. This fact is one of the important drivers behind this research.

In this particular implementation of NC-OPT, the compressor power is treated as a setpoint. The previous data set showed that the system seeks to keep power consumption close to a desired setpoint, but balances this with the zone temperature (comfort) needs of the users. In order to cause the system to actually seek to minimize power consumption, the setpoint can be set to zero. While a power consumption of zero is obviously not feasible, this setpoint value turns the setpoint tracking problem into an optimization with respect to power consumption.
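In terms of the centralized cost of Eq. (9), setting the power setpoint to zero simply converts the corresponding tracking term into a direct penalty on consumption. Using the notation assumed earlier (with Q_W denoting the hypothetical block of Q that weights the power output),

\bar{e}_W = 0 - \dot{\bar{W}} \;\Rightarrow\; \bar{e}_W^T Q_W \bar{e}_W = \dot{\bar{W}}^T Q_W \dot{\bar{W}}

so the optimizer trades compressor power directly against the zone-temperature tracking errors, with Q_W setting the exchange rate.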

Two different startup tests were run, wherein the system is turned on and allowed to come to steady state values. For one of the tests, the system power origin of 0.37 kW is used as a setpoint; for the other, a power setpoint of zero. Figure 17 shows that the controllers settle on a higher suction pressure setpoint for the new case, resulting in a slower compressor speed and thus a reduction of approximately 20% in compressor power consumption.

The resulting zone temperatures are given in Fig. 18. Zones 1 and 2 have large error weights (see Appendix), so the resulting change in zone temperature is small. However, since the error weight on zone 3 is much smaller than those of the other zones, tracking performance for this zone temperature is significantly degraded. This displays the flexibility of the approach—different weights can be assigned to different zones, permitting a degree of load shedding by the system if necessary to achieve power consumption reduction.

Conclusion

The NC-OPT algorithm presented herein provides convergence to a centralized or “Pareto” optimum using an iterative approach. This approach relies on communication only between “neighbor” subsystems, and subsystems require no knowledge of other systems' dynamics or cost functions. An interconnection matrix guides the communications between controllers. This is a key departure from other recent DMPC approaches that require that all subsystems communicate with each other, contain models of other subsystem dynamics, and use a common system-level cost function to achieve Pareto optimality. An important implication of this is a high level of modularity: An alteration to a particular subsystem does not require any changes to neighboring controllers. The ability to use existing local level controllers along with the distributed optimization is one of the particular strengths of the NC-OPT architecture, since VCC systems are generally controlled with a suite of local level single-input, single-output controllers. Additionally, the construction of the controllers is intuitive and builds upon basic controls knowledge, which is an important practical aspect for inclusion in building systems.

Another feature is that this approach allows for the iteration and communication between neighboring subsystems to be decoupled from the implementation of the control action. In essence, the discrete-time dynamics describing the NC-OPT algorithm can evolve separately from (or simultaneously with) the dynamics of the controlled system. Note that the communication requirements are relatively low: Only two vectors are communicated to each system, the predicted inputs and the penalty term. In a building system this communication could be handled through the centralized building monitoring system, or directly between neighboring subsystems in other applications without such a monitoring system.

While the calculation of setpoints alone is useful for many applications where the controller is coupled with existing dynamic controllers, the extension of the approach to calculate the optimum dynamic path of the system is an important future direction for this work. Additional developments and designs for the architecture for specific systems would also be useful, such as the implementation of coupled constraints. Development of the interconnection matrix for large, complicated systems with hundreds of components remains an open challenge; judicious use of adaptive algorithms might prove a fruitful solution to this problem. From a purely theoretical perspective, further development of the algorithm to include stability proofs for hard constraints and nonlinear cost functions and models would be a welcome development.

Acknowledgment

This material is based upon work supported by the National Science Foundation under Grant No. CMMI-0644363, and the U.S. Department of Commerce under Award No. 60NANB12D209.

Appendix

References

1. U.S. Dept. of Energy, 2012, Annual Energy Review 2011.
2. U.S. Energy Information Administration, 2009, Emissions of Greenhouse Gases in the United States 2008.
3. Brown, R. E., and Koomey, J. G., 2003, “Electricity Use in California: Past Trends and Present Usage Patterns,” Energy Policy, 31(9), pp. 849–864, doi:10.1016/S0301-4215(02)00129-5.
4. Camacho, E. F., and Bordons, C., 2004, Model Predictive Control (Advanced Textbooks in Control and Signal Processing), Springer, London/New York.
5. Mayne, D., Rawlings, J., Rao, C., and Scokaert, P., 2000, “Constrained Model Predictive Control: Stability and Optimality,” Automatica, 36(6), pp. 789–814, doi:10.1016/S0005-1098(99)00214-9.
6. Qin, S., and Badgwell, T., 2003, “A Survey of Model Predictive Control Technology,” Control Eng. Pract., 11(7), pp. 733–764, doi:10.1016/S0967-0661(02)00186-7.
7. Garcia, C. E., Prett, D. M., and Morari, M., 1989, “Model Predictive Control: Theory and Practice—A Survey,” Automatica, 25(3), pp. 335–348, doi:10.1016/0005-1098(89)90002-2.
8. Leducq, D., Guilpart, J., and Trystam, G., 2006, “Non-Linear Predictive Control of a Vapour Compression Cycle,” Int. J. Refrigeration, 29(5), pp. 761–772, doi:10.1016/j.ijrefrig.2005.12.005.
9. Elliott, M. S., and Rasmussen, B. P., 2008, “Model-Based Predictive Control of a Multi-Evaporator Vapor Compression Cooling Cycle,” American Control Conference, Seattle, WA, June 11–13, doi:10.1109/ACC.2008.4586698.
10. Elliott, M. S., and Rasmussen, B. P., 2009, “A Model-Based Predictive Supervisory Controller for Multi-Evaporator HVAC Systems,” American Control Conference, St. Louis, MO, June 10–12, doi:10.1109/ACC.2009.5160498.
11. Huang, G., Wang, S., and Xu, X., 2009, “A Robust Model Predictive Control Strategy for Improving the Control Performance of Air-Conditioning Systems,” Energy Convers. Manage., 50(10), pp. 2650–2658, doi:10.1016/j.enconman.2009.06.014.
12. Xi, X.-C., Poo, A.-N., and Chou, S.-K., 2007, “Support Vector Regression Model Predictive Control on a HVAC Plant,” Control Eng. Pract., 15(8), pp. 897–908, doi:10.1016/j.conengprac.2006.10.010.
13. Xu, M., Li, S., and Cai, W., 2005, “Practical Receding-Horizon Optimization Control of the Air Handling Unit in HVAC Systems,” Ind. Eng. Chem. Res., 44(8), pp. 2848–2855, doi:10.1021/ie0499411.
14. Muske, K. R., and Badgwell, T. A., 2002, “Disturbance Modeling for Offset-Free Linear Model Predictive Control,” J. Process Control, 12(5), pp. 617–632, doi:10.1016/S0959-1524(01)00051-8.
15. Jia, D., and Krogh, B. H., 2001, “Distributed Model Predictive Control,” 2001 American Control Conference, Arlington, VA, June 25–27, doi:10.1109/ACC.2001.946306.
16. Venkat, A. N., Rawlings, J., and Wright, S. J., 2005, “Stability and Optimality of Distributed Model Predictive Control,” 44th IEEE Conference on Decision and Control, Seville, Spain, Dec. 12–15, doi:10.1109/CDC.2005.1583235.
17. Başar, T., and Olsder, G. J., 1999, Dynamic Noncooperative Game Theory (Classics in Applied Mathematics), SIAM, Philadelphia, PA.
18. Venkat, A. N., Rawlings, J., and Wright, S. J., 2007, “Distributed Model Predictive Control of Large-Scale Systems,” Assessment and Future Directions of Nonlinear Model Predictive Control, Springer, Berlin.
19. Stewart, B. T., Venkat, A. N., Rawlings, J., Wright, S. J., and Pannocchia, G., 2010, “Cooperative Distributed Model Predictive Control,” Syst. Control Lett., 59(8), pp. 460–469, doi:10.1016/j.sysconle.2010.06.005.
20. Rawlings, J., and Mayne, D., 2009, Model Predictive Control: Theory and Design, Nob Hill Publishing, Madison, WI.
21. Stewart, B. T., Wright, S. J., and Rawlings, J. B., 2011, “Cooperative Distributed Model Predictive Control for Nonlinear Systems,” J. Process Control, 21(5), pp. 698–704, doi:10.1016/j.jprocont.2010.11.004.
22. Maestre, J. M., Muñoz de la Peña, D., and Camacho, E. F., 2011, “Distributed Model Predictive Control Based on a Cooperative Game,” Opt. Control Appl. Methods, 32(2), pp. 153–176, doi:10.1002/oca.940.
23. Maestre, J. M., Muñoz de la Peña, D., Camacho, E. F., and Alamo, T., 2011, “Distributed Model Predictive Control Based on Agent Negotiation,” J. Process Control, 21(5), pp. 685–697, doi:10.1016/j.jprocont.2010.12.006.
24. Ferramosca, A., Limon, D., Alvarado, I., and Camacho, E. F., 2013, “Cooperative Distributed MPC for Tracking,” Automatica, 49(4), pp. 906–914, doi:10.1016/j.automatica.2013.01.019.
25. Cheng, R., Fraser Forbes, J., and Yip, W. S., 2008, “Dantzig–Wolfe Decomposition and Plant-Wide MPC Coordination,” Comput. Chem. Eng., 32(7), pp. 1507–1522, doi:10.1016/j.compchemeng.2007.07.003.
26. Camponogara, E., and De Oliveira, L. B., 2009, “Distributed Optimization for Model Predictive Control of Linear-Dynamic Networks,” IEEE Trans. Syst., Man, Cybern., Part A, 39(6), pp. 1331–1338, doi:10.1109/TSMCA.2009.2025507.
27. Ocampo-Martínez, C., Bovo, S., and Puig, V., 2011, “Partitioning Approach Oriented to the Decentralised Predictive Control of Large-Scale Systems,” J. Process Control, 21(5), pp. 775–786, doi:10.1016/j.jprocont.2010.12.005.
28. Alessio, A., Barcelli, D., and Bemporad, A., 2011, “Decentralized Model Predictive Control of Dynamically Coupled Linear Systems,” J. Process Control, 21(5), pp. 705–714, doi:10.1016/j.jprocont.2010.11.003.
29. Necoara, I., Nedelcu, V., and Dumitrache, I., 2011, “Parallel and Distributed Optimization Methods for Estimation and Control in Networks,” J. Process Control, 21(5), pp. 756–766, doi:10.1016/j.jprocont.2010.12.010.
30. Christofides, P. D., Scattolini, R., De La Peña, D. M., and Liu, J., 2013, “Distributed Model Predictive Control: A Tutorial Review and Future Research Directions,” Comput. Chem. Eng., 51, pp. 21–41, doi:10.1016/j.compchemeng.2012.05.011.
31. Ma, Y., Anderson, G., and Borrelli, F., 2011, “A Distributed Predictive Control Approach to Building Regulation,” 2011 American Control Conference, San Francisco, CA, June 29–July 1.
32. Morosan, P.-D., Bourdais, R., Dumur, D., and Buisson, J., 2011, “A Distributed MPC Strategy Based on Benders' Decomposition Applied to Multi-Source Multi-Zone Temperature Regulation,” J. Process Control, 21(5), pp. 729–737, doi:10.1016/j.jprocont.2010.12.002.
33. Morosan, P.-D., Bourdais, R., Dumur, D., and Buisson, J., 2010, “Building Temperature Regulation Using a Distributed Model Predictive Control,” Energy Build., 42(9), pp. 1445–1452, doi:10.1016/j.enbuild.2010.03.014.
34. Bourdais, R., and Guéguen, H., 2010, “Distributed Predictive Control for Complex Hybrid System. The Refrigeration System Example,” 12th IFAC Symposium on Large Scale Systems, Rennes, France.
35. Zhang, Y., and Li, S., 2007, “Networked Model Predictive Control Based on Neighbourhood Optimization for Serially Connected Large-Scale Processes,” J. Process Control, 17(1), pp. 37–50, doi:10.1016/j.jprocont.2006.08.009.
36. Farina, M., and Scattolini, R., 2012, “Distributed Predictive Control: A Non-Cooperative Algorithm With Neighbor-to-Neighbor Communication for Linear Systems,” Automatica, 48(6), pp. 1088–1096, doi:10.1016/j.automatica.2012.03.020.
37. Scheu, H., and Marquardt, W., 2011, “Sensitivity-Based Coordination in Distributed Model Predictive Control,” J. Process Control, 21(5), pp. 715–728, doi:10.1016/j.jprocont.2011.01.013.
38. Elliott, M. S., and Rasmussen, B. P., 2012, “Neighbor-Communication Model Predictive Control and HVAC Systems,” American Control Conference, Montreal, Canada, June 27–29.
39. Elliott, M. S., Jones, E., and Rasmussen, B. P., “A Study of Evaporator Superheat Dynamics, Efficiency Degradation, and Control,” Int. J. Refrigeration (in press).