Abstract
Cyber–physical–social systems (CPSS) with highly integrated functions of sensing, actuation, computation, and communication are becoming mainstream consumer and commercial products. The performance of CPSS heavily relies on information sharing between devices. Given the extensive data collection and sharing, security and privacy are major concerns. Thus, one major challenge in designing such CPSS is how to incorporate the perception of trust into product and systems design. Recently, a trust quantification method was proposed to measure the trustworthiness of CPSS with quantitative metrics of ability, benevolence, and integrity. The CPSS network architecture can be optimized by choosing a subnet such that the trust metrics are maximized. This combinatorial network optimization problem, however, is computationally challenging, and most of the available global optimization algorithms for solving it are heuristic. In this paper, a surrogate-based discrete Bayesian optimization method is developed to perform network design, where the most trustworthy CPSS network with respect to a reference node is formed for the node to collaborate and share information with. The applications of the ability and benevolence metrics in design optimization of CPSS architecture are demonstrated.
1 Introduction
Cyber–physical systems (CPS) are physical devices with highly integrated functions of sensing, actuation, computation, and communication. Both consumer and commercial products are becoming more intelligent as they are implemented as CPS. These devices have embedded sensors and can collect data about the surrounding environment. The data are shared among devices, which helps human users as well as the intelligent devices themselves make decisions. The decisions can then be executed with the actuation units of the devices. CPS devices are essential elements of smart homes, smart cities, intelligent manufacturing, personalized medicine, autonomous and safe transportation, omnipresent energy supplies, and many other applications. When CPS interact with human users and are integrated with human society, they are also termed cyber–physical–social systems (CPSS), where the social dimension of the systems needs to be considered.
The design of CPSS is challenging because various factors and constraints in the cyber, physical, and social dimensions of the design space need to be considered. There are unique challenges in CPSS design, such as sustainability, reliability, resilience, interoperability, adaptability, bio-compatibility, flexibility, and safety in the physical subspace. There are also principles of human-in-the-loop, data-driven design, co-design, scalability, usability, and security that need to be considered in the cyber subspace. In the social subspace, the perceptions of risk, trust, and privacy, as well as the memory capacity and emotions of users, need to be incorporated.
The rapid growth of CPSS requires engineers to adopt a new design-for-connectivity principle. Different from traditional products, CPSS devices heavily rely on information sharing with each other to function. A standalone CPSS device that is disconnected from networks cannot perform the functions for which it is designed. Thus, network connectivity is essential for CPSS, and those devices form the Internet of Things (IoT). How to consider connectivity-related issues in product design is therefore new to engineers. In particular, each CPSS device constantly collects data and shares them with other devices in the networks, so information security and privacy become critical issues in designing such networked systems. At the high-level application layer, decisions of what data can be collected, where data are stored, who can access the data, and which portion of the data can be shared need to be made during software design. These design decisions simultaneously affect hardware and mechanism design. The effectiveness of CPSS functionalities critically depends on what information is shared and how. Therefore, trust is an important design feature for these systems to work together, and designing the decision-making units on CPSS, or the decision support for human users, needs to incorporate the social dimension of trust.
Furthermore, how to design trustworthy CPSS that human users are willing to adopt and use is critical, as personal information is likely to be collected and shared by the devices. Users’ trust perceptions about a system may vary and can affect the effectiveness of human–device interactions. Thus, the social dimension of trust is an important factor for design engineers to consider.
Trust has been extensively studied in the domains of psychology, organizational behavior, marketing, and computer science. However, most studies remain conceptual and qualitative. Quantitative measurements of trustworthiness are needed when the concept is applied in engineering design and optimization. Some quantitative studies of trust have been conducted in computer science, where trustworthiness is mostly quantified by quality of service (QoS) in network communication, e.g., success rates and consistency in packet forwarding and other transactions. Online reputations from user ratings and recommendations have also been used. These metrics, however, are quantities in the cyber design space only. There is still a lack of trustworthiness metrics spanning both the cyber and social design spaces, which are important to guide the design of trustworthy CPSS at the levels of network architecture and devices.
In this work, the perception of trust is quantified and applied in CPSS architecture design, where a node’s collaboration network is obtained by maximizing the level of trustworthiness. The quantitative trustworthiness metrics are based on the recently proposed ability-benevolence-integrity (A-B-I) model [1–3], where trustworthiness is quantified by the cyber–social metrics of ability, benevolence, and integrity. Ability shows how well a trustee is capable of doing what it claims to perform. Benevolence indicates whether the trustee is motivated purely by its own benefit. Integrity measures whether the trustee does what it claims to do. Based on a mesoscale probabilistic graph model [4,5] of CPSS, the perceptions of ability, benevolence, and integrity can be quantified with the probabilities of good judgements for the nodes as well as the information dependencies among nodes.
In this paper, we further demonstrate how to apply the quantitative trustworthiness metrics as design criteria in network architecture design and optimization. The metrics of ability and benevolence are used as the utilities to identify an optimal subset of nodes in the network that a node can trust and collaborate with. A new discrete Bayesian optimization (DBO) method is proposed to solve the combinatorial network optimization problem. Bayesian optimization is a surrogate-based global optimization scheme that incorporates uncertainty in the search process. The proposed discrete optimization method employs Gaussian process surrogates with a new discrete kernel function in searching for the best combinations of nodes. The new discrete kernel is developed to better measure the similarity between networks with respect to the objective function.
Different from other global optimization approaches such as the commonly used genetic algorithms, simulated annealing, and other “memoryless” heuristic algorithms, Bayesian optimization keeps the search history. In addition, an acquisition function is constructed and used to guide the searching or sequential sampling process. It is designed to strike a balance between exploration and exploitation. During sequential sampling, the surrogate of the objective function is continuously updated based on the Bayesian belief update when new samples are available. Therefore, the searching process in Bayesian optimization can be accelerated with the properly designed surrogate model and acquisition function. This provides unique advantages in discrete optimization over traditional heuristic algorithms, especially for complex combinatorial problems where exhaustive search in the discrete solution space is computationally prohibitive.
In the remainder of this paper, the existing work on system-level design of CPSS, discrete Bayesian optimization, and trust quantification approaches is reviewed in Sec. 2, where the probabilistic graph model of CPSS is also introduced. In Sec. 3, the metrics of ability and benevolence in the A-B-I trust model are introduced. The discrete Bayesian optimization method is described in Sec. 4. In Sec. 5, the application of Bayesian optimization to CPSS network architecture design is demonstrated with the ability and benevolence metrics, and concluding remarks are given in Sec. 6.
2 Background
Here, an overview of CPSS systems-level design is given. The existing research on discrete Bayesian optimization and trust quantification is reviewed. The probabilistic graph model of CPSS, upon which the A-B-I model is based, is also introduced.
2.1 Systems-Level Design of CPSS.
Compared with traditional products, the design of CPSS requires engineers to have a better understanding of systems-level behaviors [6], from conceptual design to design optimization of multidisciplinary and hierarchical architectures [7]. Given the evolutionary nature of cyber and physical technologies, adaptability that enables self-learning, self-organization, and context awareness is important [8]. As the complexity of CPSS networks grows, the emphasis for large networks should be more on resilience (the ability to recover) than on reliability (the ability to stay functioning) [4,5].
Some systems modeling methods and tools have been applied for CPSS design and analysis, such as hybrid discrete-event and continuous simulations [9–11], inductive constraint logic programming [12], abductive reasoning [13], hybrid timed automaton [14], ontologies [15], information schema [16], UML [17], SysML [18], and information dynamics modeling [19]. The high-dimensional design space of CPSS includes not only the cyber and physical subspaces but also the social subspace. The modalities for human–system interaction [20], context awareness and personalized human–system communication [21], as well as trusted collaboration [1–3] have been studied.
To support systems design, optimization methods for large-scale networks at the metasystem level are necessary. Network optimization usually involves combinatorial problems. Here, we propose to use Bayesian optimization to solve them.
2.2 Bayesian Optimization for Discrete Problems.
Bayesian optimization is a class of surrogate-based methods that search for the global optimum under uncertainty with Bayesian sequential sampling strategies. The search or sampling process is based on an acquisition function defined in the same input space as the objective function. In parallel, a surrogate model of the objective is constructed and updated during the search. The most commonly used surrogate is the Gaussian process regression (GPR) model, which is updated based on the Bayesian principle. The surrogate keeps the search history, since it is constructed from the samples; at the same time, it helps decide the next sample in the sequential sampling. Therefore, if the surrogate model is designed properly, surrogate-based optimization methods can be more efficient than other “memoryless” search methods. Bayesian optimization has been widely used in continuous domains and only recently gained attention in discrete domains. Here, the review is focused on its use for discrete problems.
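To make this sequential-sampling pattern concrete, the following minimal sketch (a hypothetical toy example using scikit-learn, not the implementation used in this paper) runs Bayesian optimization with a GPR surrogate and the expected improvement acquisition on a one-dimensional continuous problem.

```python
# Minimal sketch of Bayesian optimization with a GPR surrogate and the
# expected improvement (EI) acquisition. Hypothetical toy example only.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def objective(x):
    # Expensive black-box function (toy stand-in), maximized at x = 0.3.
    return -(x - 0.3) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5, 1))            # initial samples
y = objective(X).ravel()

for _ in range(20):                           # sequential sampling loop
    # Bayesian belief update: refit the surrogate with all samples so far.
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, y)
    Xc = rng.uniform(0, 1, size=(1000, 1))    # cheap candidate points
    mu, sigma = gp.predict(Xc, return_std=True)
    z = (mu - y.max()) / np.maximum(sigma, 1e-12)
    ei = (mu - y.max()) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = Xc[np.argmax(ei)]                # balances exploration/exploitation
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))

print("best x:", float(X[np.argmax(y)][0]), "best y:", y.max())
```

Each iteration adds exactly one evaluation of the expensive objective, while the acquisition is evaluated only on the cheap surrogate.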
For mixed-integer problems, Tran et al. [22] proposed a Gaussian mixture approach to combine a discrete number of design subspaces for continuous variables. Each subspace contains a GPR surrogate model, and the global model is the Gaussian mixture. Iyer et al. [23] mapped the discrete variables to a continuous latent space so that the mixed-integer problem is converted to a continuous one.
For discrete problems, the straightforward extension is treating discrete variables as continuous ones and rounding the variable values to the closest integers during the search. Baptista and Poloczek [24] proposed a quadratic acquisition function for combinatorial problems and converted the binary variables to high-dimensional vectors during the search; the solutions are then projected back to the binary space. However, this approach may fail to identify the true optimum and become trapped in a local region because of the mismatch between the true discontinuous objective function and the assumed continuous acquisition function. Zaefferer et al. [25] replaced the continuous distance with discrete distance measures and compared the performance using the expected improvement (EI) acquisition function. Garrido-Merchán and Hernández-Lobato [26] developed an input variable transformation to ensure that the distance between any two discrete variables remains unchanged in evaluating kernels when the variables are perturbed into the continuous space. Zhang et al. [27] proposed a new kernel function for permutation problems based on the Hamming distance and prior knowledge about similarity in the problems; a sparse Gaussian process model was used to reduce the computational cost of the kernel update. Oh et al. [28] represented the discrete solutions of combinatorial problems as combinatorial graphs and embedded the adjacency information in the kernel function.
The major research question for discrete Bayesian optimization is how to design discrete kernels so that the differences between samples in the discrete space, which are problem-specific, are quantitatively reflected in the distance measure. A thorough understanding of this question is still lacking.
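As an illustration of this design question, the sketch below (hypothetical code, with parameter names of our own choosing) contrasts a single-parameter Hamming-distance kernel with a multi-parameter variant that weights each position separately; the two correspond in spirit to the kernels compared later in Sec. 5.

```python
import numpy as np

def hamming_kernel_single(X1, X2, theta=1.0):
    """Single-parameter kernel: k(a, b) = exp(-theta * d_H(a, b)),
    where d_H is the Hamming distance between binary vectors."""
    d = (X1[:, None, :] != X2[None, :, :]).sum(axis=2)
    return np.exp(-theta * d)

def hamming_kernel_multi(X1, X2, theta):
    """Multi-parameter kernel: each position i has its own weight theta[i],
    so mismatches at different positions contribute differently:
    k(a, b) = exp(-sum_i theta[i] * [a_i != b_i])."""
    mism = (X1[:, None, :] != X2[None, :, :]).astype(float)
    return np.exp(-(mism * np.asarray(theta)).sum(axis=2))

a = np.array([[0, 1, 1, 0]])
b = np.array([[1, 1, 0, 0]])
print(hamming_kernel_single(a, b, theta=0.5))                  # counts mismatches only
print(hamming_kernel_multi(a, b, theta=[2.0, 0.1, 0.1, 0.1]))  # position 0 matters most
```

The single-parameter kernel sees only the number of mismatched positions, whereas the multi-parameter kernel can learn that flipping some positions matters more than flipping others.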
2.3 Trust Quantification for Cyber–Physical Systems.
Conceptually, trust is the willingness to be vulnerable to another party. It is a different concept from security: security is necessary for trust, but security alone cannot guarantee trustworthiness. For instance, although security protocols can ensure that data are not intercepted during transmission, they provide no guarantee against misuse by the receiving party or fraud by the transmitting party. In recent studies in cyberspace, trust was quantified with reputation, ratings, and user recommendations in information systems and social networks [22,29]. It was also measured by QoS, routing and delivery success rates, and consistency of data forwarding in computer networks and sensor networks [30,31]. Approaches based on probability [32–34], imprecise probability [35,36], and fuzzy logic [37–39] have been developed to quantify the human perception of trust. It should be noted that trust in the social space and its dynamics also need to be taken into consideration [40,41].
To quantify the trustworthiness of CPS, Chen et al. [42] developed a fuzzy model of trust based on the reputation of communication efficiency. Huang et al. [43] represented trust as probabilistic measures of the trustor’s belief and the trustee’s performance. Al-Hamadi and Chen [44] calculated trust from user ratings aggregated over different time periods and locations. Yu et al. [45] quantified trustworthiness as a weighted average of reliability, availability, and security. Xu et al. [46] used the weighted average of direct user experiences and others’ recommendations to evaluate the trust of edge computing devices. Tang et al. [47] measured sensor data trustworthiness in sensor networks based on sensor-object distances, whereas Tao et al. [48] used the consistency with reference datasets. Xu et al. [46] also quantified the trustworthiness of CPS nodes by a combination of QoS and reputation, whereas Junejo et al. [49] used QoS measurements and Xia et al. [50] used reputation.
Different from the above, Wang [1–3] developed a quantitative A-B-I model with multi-faceted metrics of ability, benevolence, and integrity. The considerations of these three factors are broader than those in the above approaches. These factors have been qualitatively investigated in studies of social organizations. As comprehensively studied by Mayer et al. [51], the common concepts and keywords used to describe trust in human society can be grouped into these three categories. For instance, the ability category includes expertise, competence, and the like. The benevolence category includes loyalty, openness, receptivity, and availability. Integrity is associated with consistency, discreetness, fairness, promise fulfillment, and reliability. The three trust factors have also been adopted in designing trustable information systems such as e-commerce [52,53], e-banking [54], and mobile health [55]. In the quantitative A-B-I model [1–3] for CPS networks, the metrics of ability, benevolence, and integrity are developed based on measurable quantities. Ability characterizes a node’s capabilities of sensing and reasoning and its influence on other nodes. Benevolence characterizes a node’s motivation for its information sharing. Integrity is related to traditional cyber and physical security and can be quantified from QoS. These A-B-I metrics can be quantitatively measured, calculated, and compared. For instance, Wang et al. [56] applied the quantitative A-B-I model to evaluate the trustworthiness of IoT nodes from their data collection and communication behaviors.
In order to build large-scale networks, trustworthiness should be treated as a transferable quantity so that it can be propagated in scalable systems. With quantitative measures of trustworthiness, the risk of deploying CPS can be quantified and assessed more thoroughly in highly complex networks where a global view of the network is difficult to obtain. Trust quantification in this work is based on a probabilistic graph model of CPSS, as introduced in Sec. 2.4.
2.4 Probabilistic Graph Model of CPSS.
The probabilistic graph model [2,5] is an abstraction of CPSS networks at the mesoscale. It captures the sensing, computing, and communication capabilities of CPSS by the prediction probabilities for all nodes in a CPSS network and the pair-wise reliance probabilities between nodes as the extent of information dependency and mutual influences. The model is illustrated in Fig. 1. The prediction and reliance probabilities of nodes are defined as follows.
The state variables contain the results from sensing, and their values can be updated through computing or reasoning. Therefore, the prediction probabilities capture the sensing and computing functionalities, whereas the reliance probabilities capture the functionality of communication. The random state variables with binary values can be extended to multiple or continuous values. For instance, a sensor may measure a value that follows some distribution, as in the prediction probability. If there is a finite set of possible values {θ1, …, θT} for the state variables, the prediction probability P(xk = θn) and the reliance probability P(xj = θn | xi = θm), where 1 ≤ m, n ≤ T, can be enumerated similarly.
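For illustration, a minimal sketch (with hypothetical numbers) of how these probabilities can be enumerated for T = 3 possible values:

```python
import numpy as np

T = 3  # a finite set {theta_1, theta_2, theta_3} of possible values

# Prediction probability of node k: P(x_k = theta_n) for n = 1..T,
# a length-T distribution (hypothetical numbers).
pred_k = np.array([0.7, 0.2, 0.1])
assert np.isclose(pred_k.sum(), 1.0)

# Reliance probability for edge i -> j: P(x_j = theta_n | x_i = theta_m),
# enumerated as a T x T matrix; each row (fixed theta_m) sums to one.
rel_ij = np.array([[0.80, 0.15, 0.05],
                   [0.10, 0.85, 0.05],
                   [0.10, 0.10, 0.80]])
assert np.allclose(rel_ij.sum(axis=1), 1.0)
```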
The edges in the probabilistic graph are directional. The neighbors of each node can be further differentiated as source nodes or destination nodes, as illustrated in Fig. 2. For one node, its source nodes are those sending information to this node, whereas the destination nodes are those receiving information from it. When receiving different cues from source nodes, a CPSS node can update its prediction probability to reflect its perception of the world. The aggregation of prediction probabilities sensitively depends on the rules of information fusion during the prediction update.
The probabilistic graph model provides a mesoscale description of CPSS networks, where information exchange and aggregation are captured. The prediction and reliance probabilities can be easily obtained for a physical system from collected historical data. The prediction probability of a node can be based on the data collected by its sensing and reasoning units; it can be estimated from the frequencies of observing correct state variable values under uncertainty or of sharing correct observations. Similarly, the reliance probability associated with an edge can be estimated from the frequencies of positive or negative predictions by the destination node given the source node’s own prediction. For instance, in a sensor network or industrial Ethernet, if the prediction probability of a sensor is used to quantify its sensitivity, the probability can be estimated as the ratio of the number of observations per time unit sent by this node to a baseline reference number sent by the best performer in the local network. The known best performer sets the upper limit. The reliance probability for each edge of the sensor network can be estimated as the ratio of the number of packets received by the destination to the number sent by the source, or as the ratio of correct observations, as a measure of communication reliability [5].
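A minimal sketch of these frequency-based estimates, with hypothetical counts:

```python
# Frequency-based estimates described above, with hypothetical counts.

# Prediction probability of a sensor node, estimated relative to the
# best performer in the local network (which sets the upper limit):
obs_per_unit_time = 45        # observations per time unit sent by this node
best_obs_per_unit_time = 60   # sent by the known best performer
pred_prob = obs_per_unit_time / best_obs_per_unit_time   # = 0.75

# Reliance probability of an edge, estimated as a packet-delivery ratio:
packets_received = 950        # packets received by the destination node
packets_sent = 1000           # packets sent by the source node
rel_prob = packets_received / packets_sent               # = 0.95
```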
If no experimental data are available to quantify the probabilities, subjective estimations can be elicited from domain experts. Probability elicitation is well established in both practice and the literature, and standard procedures can be used to elicit the probabilities associated with events from domain experts as subjective estimates.
3 The Ability-Benevolence-Integrity Trust Model
Based on the probabilistic graph model, the trust metrics of ability and benevolence in the A-B-I model [1–3] can be calculated. The quantitative metrics in the A-B-I model are summarized in Fig. 3. The trust level is quantified by three orthogonal metrics: ability, benevolence, and integrity. The ability of a CPSS node is measured by its capability of making correct predictions, its capability of processing information for decision making, from the perspectives of sensing and computation, and its influence on other nodes. Benevolence is measured, from the perspective of communication, by reciprocity, the willingness to share information reciprocally, and by motive, the motivation for sharing. The integrity of a CPSS node is closely related to cybersecurity and can be evaluated with consistency, frequency of compromises, QoS, and other security measurements.
Here, only the metrics of ability and benevolence are summarized. They will be used as the utilities to demonstrate the network optimization. Since integrity has been studied extensively in cybersecurity, ability and benevolence better showcase the uniqueness of the proposed trust measurements. The complete description of the A-B-I trust model, as well as illustrations of the metrics and their use for detecting malicious attacks, can be found in Ref. [2].
3.1 Ability.
The ability of a CPSS node is evaluated by its capabilities of prediction and information processing as well as its influence on other nodes. The capability of prediction is measured by the node’s functionality of data collection. The capability of information processing is measured by its functionality of reasoning based on data obtained from its neighbors. The influence on others is quantified by how influential the information it shares is in the decision making of others. These quantities can be captured by the prediction probability and the reliance probabilities perceived by others, as well as the precisions of those perceptions.
Based on the directions of information sharing between nodes, the neighboring nodes for each node in the network are categorized as source nodes and destination nodes, as illustrated in Fig. 2. With respect to node j, the set of source nodes that share information with node j is denoted as S_j, and the set of destination nodes that receive information from node j is denoted as D_j.
The perceptions of the P- and Q-reliance probabilities between nodes i and j are related to the information processing capability of node j. A high P-reliance probability indicates that node j can absorb knowledge quickly, whereas a high Q-reliance probability shows that node j can maintain good judgement even in a noisy and uncertain situation. We simplify the notations as L_ij = ℙ(P(x_j = θ | x_i = θ)) and Q_ij = ℙ(P(x_j = θ | x_i ≠ θ)), respectively. They are assumed to follow Gaussian distributions with means E(L_ij | A_j) = p_ij and E(Q_ij | A_j) = q_ij, and variances σ²_{L_ij} and σ²_{Q_ij}, respectively.
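In display form, with the notation introduced above, the assumed perception model reads

$$L_{ij}\mid A_j \sim \mathcal{N}\!\left(p_{ij},\, \sigma_{L_{ij}}^2\right), \qquad Q_{ij}\mid A_j \sim \mathcal{N}\!\left(q_{ij},\, \sigma_{Q_{ij}}^2\right),$$

where the means capture how strongly node j is perceived to rely on node i under correct and incorrect cues, respectively, and the variances capture the precisions of those perceptions.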
Therefore, a node that gives accurate predictions, makes sound decisions, and brings positive influences to others is deemed to be trustworthy.
Higher-order perceptions of ability can be similarly defined.
3.2 Benevolence.
The benevolence of a CPSS node is evaluated by reciprocity and motive. The perception of reciprocity is measured by the node’s willingness to share information with others while simultaneously receiving information from them. The motive is quantified by the quality of the information shared with others and the frequency of sharing.
4 Discrete Bayesian Optimization
The trust-based network optimization is to identify the subset of nodes in the network that is the most trustworthy with respect to a reference node. The optimization problem involves choosing the best subset of nodes and is therefore combinatorially complex. The traditional approach to solving such problems is to use heuristic algorithms such as genetic algorithms and simulated annealing.
Here, a new DBO method is developed to perform the CPSS network optimization. The design problem is to choose the optimal subgraph of a graph with respect to a reference node such that the trustworthiness level perceived by the reference node is maximized.
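A condensed sketch of this loop is given below. It is a hypothetical illustration, not the paper’s code: `ability` is a placeholder for the trust metric of Sec. 3 (Eq. (13)), subnets are encoded as binary inclusion vectors over the non-reference nodes, the GP uses a single-parameter Hamming kernel as in Sec. 2.2, and the simulated-annealing search over the acquisition (used in Sec. 5) is approximated by random bit-flip proposals.

```python
# Sketch of the DBO loop for trust-based subnet selection. Hypothetical
# illustration only: `ability` is a placeholder for the metric of Eq. (13).
import numpy as np
from scipy.stats import norm

def ability(mask):
    """Placeholder objective: ability of the reference node under the
    subnet encoded by `mask` (mask[i] = 1 keeps node i)."""
    w = np.sin(np.arange(mask.size) + 1.0)   # fixed, arbitrary node weights
    return float(mask @ w - 0.05 * mask.sum() ** 1.5)

def gp_posterior(X, y, Xc, theta=0.5, noise=1e-4):
    """GP posterior mean/std with a single-parameter Hamming kernel."""
    def k(A, B):
        d = (A[:, None, :] != B[None, :, :]).sum(axis=2)  # Hamming distance
        return np.exp(-theta * d)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(X, Xc)
    sol = np.linalg.solve(K, Ks)                          # K^{-1} K_s
    mu = sol.T @ y
    var = np.clip(1.0 - np.einsum('ij,ij->j', Ks, sol), 1e-12, None)
    return mu, np.sqrt(var)

n = 20                                        # candidate nodes besides the reference
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(50, n))          # 50 initial subnet samples
y = np.array([ability(x) for x in X])

for _ in range(50):
    # Acquisition maximization: Sec. 5 uses simulated annealing; here it is
    # approximated by random single-bit-flip proposals around past samples.
    Xc = X[rng.integers(len(X), size=200)].copy()
    Xc[np.arange(200), rng.integers(0, n, size=200)] ^= 1
    mu, sigma = gp_posterior(X, y, Xc)
    z = (mu - y.max()) / sigma
    ei = (mu - y.max()) * norm.cdf(z) + sigma * norm.pdf(z)  # EI acquisition
    x_next = Xc[np.argmax(ei)]
    X = np.vstack([X, x_next])                # one expensive evaluation per iteration
    y = np.append(y, ability(x_next))

print("best ability:", y.max())
```

Only `ability`, the expensive objective, is evaluated once per iteration; all acquisition evaluations run on the cheap surrogate, which is the source of the cost advantage discussed in Sec. 5.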
5 Trust-Based Strategic Network Design
A strategic network for a node is the most trustworthy network with which the node can form a strategic collaboration relation. The design of such a strategic network is to identify a subset of nodes within the complete network such that the node has the highest trustworthiness level. The trustworthiness metrics of ability and benevolence are used here to demonstrate trust-based strategic network design. Network optimization based on other metrics, such as integrity, can be done similarly.
5.1 Ability as the Optimization Criterion.
Ability in Eq. (13) is first utilized as the metric to identify the most trustworthy network for a reference node. The strategic network of the reference node is obtained by finding the network in which the ability of the reference node is maximized. Three networks with 20, 40, and 60 nodes, shown in Fig. 4, are generated with random connections for testing. The prediction and reliance probabilities are also randomly generated. Random networks, rather than deterministic ones, are used to better test the robustness and scalability of the design optimization method.
The EI acquisition in Eq. (23) and the UCB acquisition in Eq. (24), along with the two kernel functions in Eqs. (25) and (26), are tested on the 20-node-192-edge example. The Hamming distance is used in the kernels. When searching for the optimum network to maximize the ability of node 0, they show different convergence rates, as compared in Fig. 5(a). The optimum solution, shown in Fig. 5(b), is found with the EI acquisition in combination with the multi-parameter kernel. During the search, a simulated annealing algorithm is applied to maximize the acquisition and decide the next sample. It is seen that the search can be trapped at a local optimum when the single-parameter kernel function in Eq. (26) is used. The single-parameter kernel does not provide as much granularity as the multi-parameter kernel and does not differentiate the contributions of different nodes to the ability of node 0. Therefore, the trained parameters tend to be suboptimal. The UCB acquisition function places more emphasis on exploitation than the EI acquisition does; thus, the search tends to get trapped in local optima.
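For reference, the standard forms of these two acquisition functions, which Eqs. (23) and (24) are presumed to follow, are

$$\mathrm{EI}(\mathbf{x}) = \left(\mu(\mathbf{x}) - f^{*}\right)\Phi(z) + \sigma(\mathbf{x})\,\phi(z), \qquad z = \frac{\mu(\mathbf{x}) - f^{*}}{\sigma(\mathbf{x})}, \qquad \mathrm{UCB}(\mathbf{x}) = \mu(\mathbf{x}) + \kappa\,\sigma(\mathbf{x}),$$

where $\mu(\mathbf{x})$ and $\sigma(\mathbf{x})$ are the posterior mean and standard deviation of the GPR surrogate, $f^{*}$ is the best objective value observed so far, $\Phi$ and $\phi$ are the standard normal CDF and PDF, and $\kappa$ trades off exploration against exploitation.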
The convergence speeds for networks of different sizes are further tested. The results are shown in Fig. 6. It is seen that as the size of the network increases, more iterations are required to find the global optimum. The reason is twofold. First, larger networks result in a higher-dimensional search space, and the complexity of searching the possible solutions grows exponentially. Second, as the dimension of the search space increases, more samples are required to construct reliable surrogate models. Therefore, more iterations are necessary to ensure convergence to the global optimum.
To compare the performance of the DBO method with commonly used heuristic algorithms, simulated annealing is applied to the same network optimization problems. For each of the three examples with 20, 40, and 60 nodes, the simulated annealing algorithm maximizing the ability metric is run five times with different numbers of annealing steps ranging from 50 to 300. The means and standard deviations of the obtained optimal ability values for those test runs are listed in Tables 1–3, respectively. The means and standard deviations of the results for five runs of the DBO algorithm after 50 iterations, with the EI acquisition and the multi-parameter kernel, are also listed in these tables. The number of annealing steps indicates the computational cost, where each step involves one evaluation of the original objective function. In the DBO search, 50 initial samples with evaluations of the objective function were obtained to construct the initial GPR surrogate. One additional sample is added at each of the iterations in Figs. 5 and 6. Each iteration involves one evaluation of the objective function, whereas the evaluation of the acquisition function in Bayesian optimization is based on the surrogate and usually costs much less, especially when the original objective function requires heavy computation. Therefore, the cost of DBO for 50 iterations is approximately equivalent to the cost of simulated annealing for 100 steps in these examples. From the comparisons, it is seen that the DBO method finds better solutions than simulated annealing at a similar cost. Furthermore, the results of the DBO method have much less variability. In other words, the DBO algorithm is also more robust than heuristic simulated annealing.
Table 1 Optimal ability values for the 20-node network: means and standard deviations over five runs of simulated annealing with different numbers of annealing steps, and of DBO after 50 iterations

| Steps | Mean | Standard deviation |
|---|---|---|
| 50 | 0.704128758 | 0.024803099 |
| 100 | 0.717732062 | 0.01618725 |
| 150 | 0.724677974 | 0.021446642 |
| 200 | 0.738149753 | 0.026914332 |
| 250 | 0.72842703 | 0.018894042 |
| 300 | 0.726842286 | 0.014625707 |
| DBO | 0.763904996 | 0.002614458 |
Table 2 Optimal ability values for the 40-node network: means and standard deviations over five runs of simulated annealing with different numbers of annealing steps, and of DBO after 50 iterations

| Steps | Mean | Standard deviation |
|---|---|---|
| 50 | 0.638595221 | 0.060644109 |
| 100 | 0.684115767 | 0.035342407 |
| 150 | 0.696934409 | 0.028088683 |
| 200 | 0.68054112 | 0.023215712 |
| 250 | 0.709194429 | 0.031983543 |
| 300 | 0.70440341 | 0.023225232 |
| DBO | 0.746661792 | 0.00340882 |
Table 3 Optimal ability values for the 60-node network: means and standard deviations over five runs of simulated annealing with different numbers of annealing steps, and of DBO after 50 iterations

| Steps | Mean | Standard deviation |
|---|---|---|
| 50 | 0.623391013 | 0.056150683 |
| 100 | 0.65012841 | 0.039877341 |
| 150 | 0.657217419 | 0.046396371 |
| 200 | 0.679789337 | 0.005860135 |
| 250 | 0.678678903 | 0.005974927 |
| 300 | 0.676195812 | 0.00793658 |
| DBO | 0.692554458 | 0.003021649 |
Besides the comprehensive ability metric, the capabilities in Eq. (9) and the influence in Eq. (11) can also be applied individually as criteria to perform design optimization based on specific interests. In addition, the second-order ability in Eq. (15) can be used as the optimization criterion. The respective optimum networks based on these three criteria for node 0 in the 20-node example are shown in Fig. 7. It is seen that different criteria lead to different optimum networks. The capabilities and influence criteria result in two different sets of optimal nodes, given that two different types of information (source nodes versus destination nodes) are applied in calculating the trustworthiness in Eqs. (9) and (11). When the ability metric in Eq. (13) is used, where both types of information are combined, the assessment of trustworthiness is more comprehensive. The most trustable nodes, as seen in Fig. 5(b), are reduced to the ones that appear in both of the previous optimum networks. Some nodes become less trustworthy when more information is considered. The second-order ability is calculated with more information, where the abilities of the destination nodes are more influential. Therefore, the result of the second-order ability differs from that of the first-order one.
5.2 Benevolence as the Optimization Criterion.
In the 20-node-192-edge example, the optimum networks for node 0 under the benevolence criterion are shown in Fig. 8. It is seen that when the self-interest weight w0 is lower, it is easier to build a larger trustworthy network. The most trustable networks in Fig. 8 based on the benevolence criterion differ from the one in Fig. 5(b) based on the ability criterion. In the more “selfish” modes of benevolence, the only trustworthy node common to Figs. 5(b) and 8(a) is node 13, and the only one common to Figs. 5(b) and 8(b) is node 15. In the more “altruistic” mode in Fig. 8(c), no node is trustworthy as measured by both benevolence and ability. Therefore, competition and conflicts exist when the different criteria of ability and benevolence are applied. If multiple criteria are considered simultaneously, multi-objective optimization methods are needed to identify Pareto solutions and make tradeoffs.
6 Concluding Remarks
In this paper, quantitative trustworthiness metrics are used as the design criteria to perform optimization of cyber–physical–social system networks. Each node can choose its own most trusted strategic network with which to collaborate and share information. The trustworthiness is quantified as multi-faceted quantities in both cyber and social spaces, including the dimensions of ability, benevolence, and integrity. In CPSS, the ability and benevolence of nodes can be calculated from the statistics of their working history to measure their capacities for information gathering, reasoning, and information sharing. When ability is used as the criterion, the most trusted strategic network for a node is the subnet that maximizes the ability of the node. A node with high capacities for observing the state of the world accurately, making sound decisions based on available information, and bringing positive impacts to others is deemed to possess a high level of ability and is thus a trustworthy individual. Similarly, a node that is willing to share accurate information with others is also regarded as trustworthy. The strategic network is the one that leads to the maximum level of ability for the reference node, or consists of a group of collaborators that are the most willing to collaborate with the reference node.
Our previous study [2] showed that the new quantitative metrics of ability and benevolence are sensitive to trust attacks. When a malicious node generates false predictions and sends them to other nodes, its perceived trustworthiness, measured by ability and benevolence, drops quickly. When the attack stops, the perceived trustworthiness gradually increases and recovers. This matches human social behaviors well: it usually takes time to establish a trust relation, whereas damage can be done much more quickly. When designing the trusted strategic network, the risks of attacks also need to be considered. Instead of targeting only the maximum trust level as shown in this paper, additional criteria for robustness need to be incorporated in future work.
The proposed discrete Bayesian optimization performs reasonably well for the combinatorial problem of network design, improving search efficiency and reducing the variability of results. For the kernel function based on the Hamming distance, more hyper-parameters increase the flexibility of the kernel, whereas a small number of hyper-parameters is not robust enough for optimization. The limitation of using multiple hyper-parameters is training efficiency: more samples are required to train a larger number of hyper-parameters, which makes the approach less practical for small problems. Combinatorial problems, however, usually have very large search spaces, where introducing additional hyper-parameters can bring the benefit of faster convergence.
In this work, only single-objective optimization is applied. The multi-faceted trustworthiness metrics will eventually require a multi-objective optimization approach [57] for trust-based design, where multiple metrics are considered simultaneously and tradeoffs need to be made. The scalability of the discrete Bayesian optimization also requires further investigation, given that the Bayesian update procedure in GPR is computationally expensive when the number of samples is large. The proposed scheme will require further tests on large-scale networks. Enhancements such as sparse GPR are likely to bring better scalability.
Footnote
A shorter version of the paper was presented at ASME IDETC/CIE2020 as Paper No. IDETC2020-22661.
Acknowledgment
This work was supported in part by the National Science Foundation under grant CMMI-1663227.
Conflict of Interest
There are no conflicts of interest.
Data Availability Statement
The datasets generated and supporting the findings of this article are available from the corresponding author upon reasonable request.