Abstract
Teams are common throughout engineering practice and industry when solving complex, interdisciplinary problems. Previous works in engineering problem solving have studied the effectiveness of teams and individuals, showing that in some circumstances, individuals can outperform collaborative teams working on the same task. The current work extends these insights to novel team configurations in virtual, interdisciplinary teams. In these team configurations, the whole meta-team can interact, but the sub-teams within them may or may not. Here, team performance and process are studied within the context of a complex drone design and path-planning problem. Via a collaborative research platform called HyForm, communication and behavioral patterns can be tracked and analyzed throughout problem solving. This work shows that nominally inspired sub-structured teams, where members work independently, outperform interacting sub-structured teams. While problem-solving actions remain consistent, communication patterns significantly differ, with nominally inspired sub-structured teams communicating significantly less. Questionnaires reveal that the manager roles in the nominally inspired sub-structured teams, which are more central in communication and information flow, experience a greater cognitive and workload burden than their counterparts in the interacting sub-structured teams. Moreover, members in the nominally inspired sub-structured teams experience their teams as inferior on various dimensions, including communication and feedback effectiveness, yet their performance is superior. Overall, this work adds to the literature on nominal versus interacting problem-solving teams, extending the finding to larger, interdisciplinary teams.
1 Introduction
Ongoing research efforts in the engineering design community focus on teamwork and collaboration [1–3]. Provocative results are emerging surrounding the notion of groups of individuals (i.e., nominal teams) outperforming collective team problem solving. This raises the critical question of whether and when teams are truly optimal, or at least better performing than individuals [4,5]. The engineering design community is not the first to illuminate such findings. Studies in the social psychology literature indicate that nominal teams can outperform idea-generating groups during brainstorming activities [6–8]. These findings contrast sharply with the plethora of benefits that teams offer to the problem-solving process, which stem from the diversity of perspectives and expertise that teams provide [9–11]. The current work is driven by the tension between these findings: How can we harmonize the efficiencies of individual problem solving with the benefits that teaming provides?
To date, the study of teaming has focused on completely interacting, or unconstrained, teams versus teams of individuals [12–14]. The current work, however, takes a novel approach to team configurations, focusing on different sub-team structures within an interacting meta-team. In other words, while the whole team interacts, the sub-team disciplines within the broader team may not. Thus, the contribution here is moving toward an interdisciplinary team whose sub-teams may or may not work on their parts of the task together. Our specific team architecture is studied within a previously used experimental platform called HyForm, which joins different disciplines during a complex design task [15]. Because the meta-team is not disciplinarily homogeneous, the sub-teams need to constantly exchange developing information from the other disciplines within the team. Here, homogeneous and interdisciplinary refer to the defined roles in this experiment rather than the backgrounds of the team members themselves. The exchange of information across disciplines occurs through a central problem manager who becomes the mediator between disciplines. The design context stages a complex interdisciplinary problem, conducted virtually via a collaborative research platform. Within this research platform, reconfigurable communication channels enable restriction of team interactions, resulting in our unique nominally inspired, sub-structured teams.
Accordingly, the following research questions (RQs) underline this work:
RQ1: Are nominally inspired or interacting sub-structured teams more effective during a complex, interdisciplinary design task?
RQ2: How do nominally inspired versus interacting sub-structured teams impact the problem-solving behaviors of the overall meta-team?
Teams are ubiquitous in problem-solving scenarios and engineering practice—in operations logistics and planning, product design, software, aerospace, and countless other applications. The hope is that this research will critically challenge the assumed notion of team superiority and drive more innovative and strategic decision-making principles for team construction.
2 Background
2.1 The Nominal Team—Comparing Teams and Individuals in an Experimental Context.
The study of teams within the engineering design community spans human subject studies and computational simulations. Nominal teams are often used throughout the literature to compare individuals and teams within experimental contexts [5,16–18]. In general, a nominal team refers to a group of participants who work individually, without communication or collaboration, wherein the best individual solution is selected as the team solution. That is, nominal teams are experimentally created artifacts, generated by randomly pairing or grouping individuals who worked on the task alone. Nominally inspired sub-teams are a more realistic analog to this experimental artifact and an important aspect of the current work as the means of comparing individual and interacting problem solving.
Work in engineering and related areas has supported the notion that nominal teams can outperform collaborative teams. For example, on a conceptual engineering design task, Gyory et al. showed that groups of individual problem solvers can produce better solutions than interacting teams, even when those teams are guided by a process manager [5]. Similarly, in data science competitions on Kaggle, simulated groups of individuals also outperformed interacting teams [19]. However, this tendency does not hold true across all contexts. For example, in collaborative computer-aided design, Phadnis et al. showed that interacting pairs of designers produce higher quality models [20]. Due to coordination overhead and inefficiencies within pairs, individuals were much quicker on a per-person basis. This result provides evidence that the efficiency of a team's interactions and collaboration processes can dictate its performance level.
Communication and other interactions among members are a defining feature of team versus individual problem solving [21–23]. Effective and cohesive communication can lead to common, shared mental models and better overall performance among teams [24,25]. Because communication is such a critical mode of team interaction, its presence or absence is leveraged to differentiate the sub-team structures in this work. The experimental platform for this study, discussed in Sec. 2.2, allows for such reconfigurable communication channels among team members.
2.2 A Collaborative Research Platform—HyForm.
HyForm is a collaborative research platform created through a partnership among researchers at Carnegie Mellon University (CMU), the Pennsylvania State University (PSU), and the PSU Applied Research Laboratory (ARL) [15]. The platform simulates a complex, interdisciplinary design problem, partnering drone designers, path planners, and business planners. A valuable tool for studying problem-solving behaviors, HyForm can track all actions and communication among team members throughout a study session. The current work uses HyForm's capabilities to restrict and study team members' interactions while they design a complex engineered system.
The HyForm platform contains three distinct modules: drone design, operations, and business plan. All team members are assigned a discipline (i.e., sub-team), and each discipline works in its respective module in HyForm. Design specialists use the drone design module to build and evaluate drones. They can arrange components such as rods, batteries, and airfoils to create different drone configurations. Each drone is then evaluated to determine its cost, range, velocity, and payload. Operations specialists work in the operations module to create and assess delivery routes for a specific target market using the drones provided by the design specialists. The problem manager uses the business plan module to select customers, define the operations specialists' market, and choose the most profitable plan from their team.
Additionally, the problem manager monitors overall team performance and facilitates communication between the design and operations specialists. HyForm contains a text chat interface that allows communication between team members. As mentioned earlier, the chat tool is reconfigurable such that experimenters can restrict or expand communication between team members, enabling different team structures to be imposed. This facilitates the study of both interacting sub-structured teams and nominally inspired sub-structured teams, which is the primary focus of this research. See Ref. [26] for figures of the modules within HyForm.
2.3 Cognitive Workload and Stress.
Solving complex design problems requires and creates high levels of mental and cognitive workload, defined as the difference between the cognitive demands of the task and the attentional resources an individual can afford [27–29]. This increase in cognitive workload can also exacerbate stress [30–32]. Particularly with tasks that are dynamically changing, uncertain, and contain large solution spaces, like the task emulated within the HyForm platform, designers must remain agile, reacting to both internal and external pressures during the problem-solving process [33,34]. Not only do high levels of cognitive workload and stress adversely affect health, but they can also lead to inferior performance and efficiency [35]. We are particularly interested in studying cognitive workload and the differences experienced under the two team structures presented here. For example, are cognitive workload and stress heightened when members working on the same part of the problem cannot directly collaborate with each other? Due to the inherent connection between these measures and team performance, they are critical to analyze when considering such different team structures.
The NASA-TLX, or NASA Task Load Index, is a common assessment tool to measure cognitive workload along a multidimensional scale [36]. Many subdimensions are ascertained from the tool, including the specific factors of mental demand, temporal demand, performance, effort, stress, discouragement, insecurity, and frustration. Previous studies have validated the assessment tool to measure both cognitive experience and stress. The NASA-TLX has exhibited robust sensitivity across a wide variety of human factors studies, both simulated and live tasks, to test for team effectiveness and performance [31,32,37–39]. These reasons make it an ideal test for measuring the impacts of different team structures on how members work together and perceive their experiences on the team.
The NASA-TLX measures several subdimensions, each of which is defined as follows [40]. Mental demand captures how much cognitive and perceptual activity is required for a task, describing the task as easy or demanding, simple or complex, and the amount of thinking and deciding involved. Physical demand relates to the amount of physical activity required, characterizing the task as easy or demanding, slack or strenuous. Describing the task as slow, rapid, or frantic, temporal demand captures the amount of time pressure felt during the task due to the pace at which the task or its elements occurred. An individual's overall performance level queries how successful they believe they were in accomplishing the task and its goals, essentially measuring their satisfaction with their own performance. Effort captures both the mental and physical work required to accomplish the achieved level of performance on the task. Finally, frustration level queries the amount of irritation, stress, and annoyance experienced while completing the task, as opposed to feeling content, relaxed, and complacent. Taken together, these dimensions measure the mental workload of individuals and provide insight into the complexity of the task, performance, and efficiency.
2.4 Virtual/Distributed Teams and Collaboration.
Another major facet of this work is the distributed nature and virtuality of the design teams. Unlike co-located teams, distributed teams depend on communication as the main mode of interacting and collaborating with one another [41]. Several characteristics are associated with the virtuality of teams—geographic dispersion, electronic dependence, structural dynamism, and national diversity [42]. Of these factors, the two most relevant here are electronic dependence and structural dynamism. In this work, team members can collaborate in one of two electronic ways: directly chatting through communication channels within the experimental platform and/or sharing their design progress with another team member. The virtual nature of collaboration also enables us to manipulate structural dynamism by directly controlling the team structures and whether members can chat with or share designs with other team members.
Conflicting evidence exists regarding the benefits and shortcomings of distributed teaming. Several researchers and studies find that the direct interactions with fellow coworkers or team members afforded by colocation improve creativity and innovation via the sharing of tacit knowledge [43]. Even ad hoc encounters with team members increase the sharing of ideas [44]. However, others posit that virtual collaboration, while much more complex, can explicitly create the space and time for individual brainstorming and thinking and can increase creativity [45]. Regardless, distance is negatively correlated with communication frequency, and it is well known that communication is correlated with team performance [46,47].
Communication and trust are critical and interrelated factors for distributed teaming, and trust may be fragile or fleeting over time [48]. The virtuality of teams necessitates different modes of communication—such as email, instant messaging, and other computer-mediated methods [49]. As with the findings on creativity and innovation, there are also inconsistent findings on communication between virtual and face-to-face interactions [50–52]. These inconsistencies generally arise from the complexities of virtual collaboration and the continuum on which it exists [53]. There are many ways to study communication, including frequency, content, quality, timeliness, and closed-loop communication [54–58]. Communication frequency is most impactful in the early stages of team formation and norming, as more frequent interactions increase the potential for building trust and a common understanding of the problem [59]. As a result, communication frequency is how communication is studied here.
3 Methodology
This section presents details regarding the methodology of the experimental design. First, Sec. 3.1 discusses the participants and the two team conditions used: the interacting sub-structured and the nominally inspired sub-structured team configurations. Then, Sec. 3.2 provides details regarding the design task, including the problem shock introduced midway through problem solving. Finally, Sec. 3.3 progresses through the timing of the 65-min experiment.
3.1 Participants and Experimental Conditions.
In total, 105 individuals participated in the study. The participants were mechanical engineering students recruited from similar mechanical engineering design classes at CMU and PSU in the United States to control for similar levels of engineering education at the university level. The study was approved by the Institutional Review Board at CMU, and all participants read and signed a consent form before partaking in the study. Following full completion of the experiment, they were compensated via an Amazon gift card, proportional to $10 per hour of their time. The study was conducted entirely online; participants interacted with the experimenter and all aspects of the study virtually through the HyForm platform.
All individuals were randomly assigned to one of two experimental conditions, an interacting sub-structured team or a nominally inspired sub-structured team. Each team consisted of five members. The main difference between the two team conditions (as shown in Fig. 1) relates to how team members interacted. The direct lines of communication are indicated by the solid arrows in the figure. Because the meta-team is not homogeneous with respect to the defined roles, the sub-teams need to constantly exchange evolving information with the other disciplines within the team. This exchange occurs through the central problem manager, who becomes the mediator between the disciplines formed by each sub-team. The experiment was conducted completely virtually via the HyForm platform. Accordingly, text-based communication channels enabled team members to communicate with one another, and they did not know each other's identities.
In the interacting sub-structured condition, the two design specialists could chat with each other, the two operations specialists could chat with each other, and each discipline could directly chat with the problem manager. When the design specialists wanted to relay information to the operations specialists, they needed to go through the problem manager. The design specialists could also see each other's submitted drone designs, and the operations specialists could see each other's submitted delivery plans, thereby sharing design progress in the sub-teams.
However, in the nominally inspired sub-structured condition, while the communication lines existed between each member and the problem manager, members within the same discipline could not directly communicate. The line of communication between the design specialists was severed, as was the line between the operations specialists. Moreover, the design specialists could not see each other's submitted drone designs, and the operations specialists could not see each other's submitted delivery plans. In this team structure, team members worked on their own designs without communication or collaboration with their fellow discipline members, submitting their work directly to the problem manager. In this manner, the structure mimics a nominal team, in which members work alone. Even so, each individual in one discipline can still obtain information from the other discipline via the problem manager. As with the team condition, participants were randomly assigned to one of the roles on the team.
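To make the channel differences concrete, the minimal sketch below enumerates the direct chat links under each condition. The member labels D1, D2 (design specialists), O1, O2 (operations specialists), and PM (problem manager) are hypothetical shorthand introduced only for illustration; they are not identifiers used within HyForm.

```python
# Minimal sketch of the two chat-channel configurations (member labels are illustrative only).
MEMBERS = ["D1", "D2", "O1", "O2", "PM"]

# Interacting sub-structured condition: within-discipline chat plus manager channels.
INTERACTING = {
    frozenset(p) for p in [("D1", "D2"), ("O1", "O2"),
                           ("D1", "PM"), ("D2", "PM"), ("O1", "PM"), ("O2", "PM")]
}

# Nominally inspired sub-structured condition: only manager-mediated channels remain.
NOMINAL = {
    frozenset(p) for p in [("D1", "PM"), ("D2", "PM"), ("O1", "PM"), ("O2", "PM")]
}

def can_chat(channels, a, b):
    """Return True if members a and b share a direct chat channel."""
    return frozenset((a, b)) in channels

# Within-discipline chat exists only in the interacting condition.
assert can_chat(INTERACTING, "D1", "D2") and not can_chat(NOMINAL, "D1", "D2")
# Cross-discipline exchange must pass through the problem manager in both conditions.
assert not any(can_chat(INTERACTING, d, o) for d in ("D1", "D2") for o in ("O1", "O2"))
assert all(can_chat(NOMINAL, m, "PM") for m in MEMBERS if m != "PM")
```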
Altogether, the final data collection consisted of 10 interacting sub-structured and 11 nominally inspired sub-structured teams.
3.2 Design Task.
Provided with an initial budget to build and operate a drone fleet, teams attempted to maximize their profit. They chose customers to deliver to from a customer map and received profit based on the distribution of different packages delivered. On a team, the drone specialists built and modified drones through the drone design module, while the operations specialists selected from these drones to create drone fleets and designed path plans among customers on the customer map. Sample actions for the drone designers within HyForm include adding drone components (motors, batteries, airfoils, etc.), increasing or decreasing the size of components, or moving components. Sample actions for the operations specialists include adding or removing delivery paths from point A to point B and selecting completed drone designs. Consequently, the local objectives for the drone designers are the range, payload, cost, and velocity of the drones, while the local objectives for the operations specialists are the cost and the amount of food/packages delivered to customers.
When drones and delivery routes were created, these plans were sent to the problem manager, whose objective was to select final plans for submission. A team's performance was measured by the submission with the highest achieved profit. Plan profits consider the overall costs from the disciplines relative to the amount of food and package deliveries to customers. The problem task is complex and requires both parallelization of subtasks and communication within a team. Each of the three disciplines has its own specialized knowledge of design variables and constraints; thus, these must be effectively communicated and worked on simultaneously within the time allotted to perform well.
While many aspects of the experimental architecture were similar to previous studies conducted in HyForm by the authors [60,61], a new problem shock was introduced for this work. During the second half of problem solving, team members were notified of specific restrictions and constraints. These constraints, along with the drone design and operations specialists' modules, are depicted in Fig. 2. First, the design specialists encountered a physical wall representing the hangar space, which created a geometric constraint on drone designs (Fig. 2(a)). Second, the operations specialists had a no-flight area on the customer map, which created an obstacle to work around in the delivery routes (Fig. 2(b)).
3.3 Design Study Timeline.
The experiment began with 15 min of prestudy materials. This included reading and signing the consent form, reading the problem brief, and filling out a prestudy questionnaire. The problem brief provided details related to the design task and the assigned roles. Each discipline on a team (drone specialist, operations specialist, and problem manager) had a distinct problem statement for their role. The prestudy questionnaire queried individuals on their experience related to certain aspects of the experiment, such as building drones, business/operations planning, and computational expertise (this questionnaire was intended to be used to form teams, but the results showed that individuals had very limited experience/exposure in these areas). Following this, participants went through a 10-min guided tutorial. The guided tutorial accustomed team members to the HyForm platform—their respective discipline's HyForm module—and the communication channels. While their working knowledge was not explicitly checked or tested afterward, the tutorial guided them through a set of tasks pertaining to their role that they would encounter during the actual task.
After completing all pre-session materials, the first problem-solving session commenced. Teams were given 20 min to work through the initial problem statement (maximize team profit with a certain budget to build drones and routes). The experimenter reminded the problem manager to submit their team's best plan by the end of the session. All plans sent to the problem managers contained corresponding profit values, so the problem manager's selection of the best plan was not significantly subjective. Afterward, team members completed a short mid-study questionnaire and were provided with a 3-min break to either rest from their computer or review tutorial materials. After the break, the second problem-solving session commenced. This session was the same as the first except for the additional problem constraints (Fig. 2)—restricted drone sizes and obstacles in the customer market. Again, the problem manager submitted their team's best plan by the end. After this second 20-min session, participants filled out a poststudy questionnaire.
Both the mid-study and poststudy questionnaires included questions from the NASA Task Load Index (NASA-TLX) survey, which, as discussed in depth in Sec. 2.3, evaluates participants' experiences of mental/temporal demand, performance, effort, stress, frustration, and other attitudes while working through the problem-solving sessions [36]. In addition to these subdimensions, three others were integrated into the assessment: stress, insecurity, and discouragement. The integration of these measures followed a methodology similar to that of Nolte and McComb [38]. These subdimensions are combined to represent an overall mental workload measure and a cognitive experience measure. The cognitive experience measure consists of an equally weighted average of the subdimensions of mental demand, temporal demand, performance, effort, and stress. The mental workload measure consists of an equally weighted average of mental demand, temporal demand, stress, discouragement, frustration, and insecurity. These measures, rather than their corresponding subdimensions, are compared between team structure conditions. In addition to the NASA-TLX questions, which assess workload at the individual level, supplementary questions queried participants about more global team characteristics, such as their overall team's effort, goals, quality of work, collaboration, and communication [62–64]. The aforementioned assessments and questions utilized different rating scales and scale types. For ease, these are discussed alongside their corresponding results in the succeeding sections.
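For clarity, the two composite measures reduce to equally weighted averages of the subdimension ratings; the abbreviations below are introduced only for illustration, with each subdimension rated on the 0–100 scale described in Sec. 4.3.1:

\[
\text{Cognitive experience} = \tfrac{1}{5}\,(\mathrm{MD} + \mathrm{TD} + \mathrm{PE} + \mathrm{EF} + \mathrm{ST})
\]
\[
\text{Mental workload} = \tfrac{1}{6}\,(\mathrm{MD} + \mathrm{TD} + \mathrm{ST} + \mathrm{DI} + \mathrm{FR} + \mathrm{IN})
\]

where MD denotes mental demand, TD temporal demand, PE performance, EF effort, ST stress, DI discouragement, FR frustration, and IN insecurity.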
4 Results
The results are broken into three main sections. First, Sec. 4.1 compares the performance, or profit achieved, between the interacting sub-structured and nominally inspired sub-structured team configurations. Second, in Sec. 4.2, problem-solving behaviors are compared. Behaviors include both communication and action counts at the team and discipline levels. Finally, Sec. 4.3 presents findings from the questionnaires related to perceptions of workload and the cognitive experience of team members. The statistical tests presented in Secs. 4.1 and 4.2 were run via Mann–Whitney U-tests owing to the smaller sample sizes, while the statistics in Sec. 4.3 were run via standard t-tests.
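As a concrete illustration of this analysis pipeline, the sketch below shows how the two kinds of tests could be run with SciPy. The arrays are placeholders standing in for the per-team maximum profits (Sec. 4.1) and per-participant questionnaire scores (Sec. 4.3); they are not the study data.

```python
# Minimal sketch of the statistical comparisons; all values below are placeholders.
from scipy import stats

# One maximum-profit value per team (11 nominally inspired teams, 10 interacting teams).
nominal_profit = [5200, 4800, 6100, 5500, 4900, 5800, 5300, 6000, 5100, 5700, 5400]
interacting_profit = [4300, 4700, 3900, 4500, 4100, 4800, 4400, 4600, 4200, 4000]

# Mann-Whitney U-test (nonparametric) for the small team-level samples in Secs. 4.1 and 4.2.
u_stat, p_val = stats.mannwhitneyu(nominal_profit, interacting_profit, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_val:.3f}")

# Standard two-sample t-test for the individual-level questionnaire measures in Sec. 4.3.
nominal_workload = [72, 65, 80, 58, 69, 75, 62]        # placeholder 0-100 ratings
interacting_workload = [55, 60, 48, 66, 52, 59, 63]
t_stat, p_val = stats.ttest_ind(nominal_workload, interacting_workload)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")
```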
4.1 Team Performance.
The first comparison between the two team configurations examines overall team performance. Due to the high degree of dependency between team member roles in achieving profitability, profit serves as the primary measure of team performance. Recall that during each problem-solving session, the problem manager continually submitted plans on behalf of their team. The plan with the maximum profit that a team achieved during each session is tracked and averaged across teams within each condition.
Figure 3 shows the average maximum profits across the two team conditions. A Mann–Whitney U-test shows that, overall, across both sessions, the nominally inspired condition achieves a significantly higher profit than the interacting condition (p < 0.038, z = 2.08). When comparing each problem-solving session individually, the largest difference occurs in the first session. These results support the underlying premise of superior outcomes from nominal teams. Here, when team members working on similar aspects of the problem are not allowed to directly communicate (chat) or collaborate (share designs), they perform better. Whether this can be attributed to more controlled communication or to another aspect of process loss in teams is explored next by examining problem-solving behaviors across teams.
4.2 Problem-Solving Behaviors.
In terms of problem-solving behaviors, two global metrics are analyzed: communication and action. The motivation underlying these two metrics is twofold. First, HyForm tracks both types of behaviors over time, and because the teams are distributed/virtual, a team member can only either act or communicate. Thus, tracking these two measures allows complete reconstruction of a team's entire problem-solving process. Second, a previous study by the authors revealed tradeoffs between time allocated toward communicating and time allocated toward acting, particularly following a problem shock like the one presented in this work. Thus, these tradeoffs can provide insights into the cognitive allocation strategies of the different types of teams [61]. Communication count (or communication frequency) represents the cumulative number of messages from one team member to another, irrespective of the content within the messages. Similarly, the action count is defined as any distinct action taken by a member within their respective module in HyForm. Communication counts are attributed only to the originator of a message rather than the receiver.
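As a simple illustration, the counts could be tallied from a per-event log as sketched below. The log schema and field names are simplifying assumptions for illustration and do not represent HyForm's actual data format.

```python
# Minimal sketch of tallying communication and action counts from an event log.
from collections import Counter

# Hypothetical log entries: chats record the originator; actions record the acting member.
events = [
    {"type": "chat",   "sender": "D1", "receiver": "PM"},
    {"type": "chat",   "sender": "PM", "receiver": "O2"},
    {"type": "action", "member": "D1", "detail": "add_component"},
    {"type": "action", "member": "O2", "detail": "add_delivery_path"},
]

# Communication count: one tally per message, attributed only to the originator.
comm_counts = Counter(e["sender"] for e in events if e["type"] == "chat")

# Action count: one tally per distinct action, regardless of the specific action taken.
action_counts = Counter(e["member"] for e in events if e["type"] == "action")

print(comm_counts)    # Counter({'D1': 1, 'PM': 1})
print(action_counts)  # Counter({'D1': 1, 'O2': 1})
```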
Via Mann–Whitney U-tests at the team level, the interacting sub-structured condition communicates significantly more than the nominally inspired substructure (p < 0.023, z = −2.27) when combining both problem-solving sessions. Looking at each session separately, the higher communication within the interacting sub-structured teams primarily holds true for the first problem-solving session (p < 0.041, z = −2.04) rather than the second (p < 0.25, z = −1.16). However, a similar trend exists in both conditions, where teams tend to communicate more following the problem shock. While perhaps not directly surprising, these results support the notion that the mediated communication channels of the nominally inspired sub-structured teams allow members to allocate fewer cognitive resources to communicating.
Figure 4 examines communication at the discipline level, showing that the difference at the team level is predominantly driven by the drone designers. In fact, the only significant difference between the two team conditions is within the drone designers (p = 0.003, z = −2.98), rather than within the operations specialists (p = 0.54, z = −0.62) or the problem managers (p = 0.14, z = −1.48). Furthermore, comparing the trend across problem-solving sessions among the disciplines, the problem managers exhibit the steepest increase, or impact, from the problem shock between sessions. This indicates that while teams relied more on the problem manager after the problem shock, this reliance did not materialize differently between the two team structures.
Next, the action count is compared between the two team conditions. Here, the action count is the average number of design changes, regardless of which specific action is taken. Figure 5 shows the average action count per team condition for each discipline. Results show that there is no significant difference between team structures (p = 0.25, z = 1.16; p = 0.72, z = −0.37; p = 0.76, z = 0.303, respectively). Thus, while the nominally inspired sub-structured teams dedicated less cognitive effort to communication, this did not directly translate into more design action effort. It could be the case that the freed-up effort allowed members in that team structure to slow down and think more, while not being distracted by additional communication.
4.3 Team Members’ Experience
4.3.1 Cognitive and Workload Experience.
The last analyses explore the mid-study and poststudy questionnaires completed by team members. The questions are based on the NASA-TLX survey to gain insights into team members' cognitive demand and workload perceptions while working through the task. Participants rated specific measures on a sliding numerical scale from 0 to 100, with corresponding bounded visual scales from very low to very high. A higher rating on the performance scale indicates that a member thought they performed better, and a higher rating on the stress scale indicates that a member experienced more stress. The other dimensions follow the same logic. Additional queries asked participants about features related to the team as a whole, including the team's productivity, effort, and whether the team came to a consensus. Participants answered these on an ordinal scale bounded from “Very inaccurate” to “Very accurate” with seven options in total.
Figure 6 shows the overall cognitive experience by team member role, and Fig. 7 shows the overall mental workload by team role. An intriguing result emerges when comparing the problem managers to the other members of the team across both measures. The problem managers report significantly greater cognitive experience (p < 0.001) and mental workload (p = 0.002) in the nominally inspired sub-structured condition during the first problem-solving session. Diving into the underlying subdimensions of these measures, the problem managers report greater stress (p < 0.001) and frustration (p < 0.001) in the nominally inspired sub-structured condition. The operations specialists are the members who report some of the lowest levels in the nominally inspired sub-structured teams. Generally, the interacting sub-structured condition does not exhibit as much variability in these measures across team roles as the nominally inspired sub-structured condition. These results are interesting because the nominally inspired structure, while lowering the burden of extra communication, made team members much more sensitive to stress, particularly the problem managers. However, the imposed stress did not reach a level that hindered performance.
4.3.2 Team Dynamics Experience.
In addition to the workload and cognitive experience questions, team members were also queried further on the dynamics of the entire team. Table 1 shows a subset of the list of questions. These specific questions were chosen from the broader set due to their relevance for understanding perceptions of team behaviors and interactions, which are expected to be impacted by the differences in team structure. Questions were answered on ordinal scales with seven discrete options, bounded between “strongly disagree/inaccurate” and “strongly agree/accurate.” The exact rating categories are shown in the subsequent figures (Figs. 8 and 9). To quantitatively analyze the differences, the categories are converted to a numerical scale (from 1 (low) to 7 (high)), averaged, and tested via a paired, two-tailed t-test; a sketch of this conversion follows Table 1.
Question | Response type
---|---
“Team fulfills its mission” | Accurate/Inaccurate |
“Team accomplishes its objectives” | Accurate/Inaccurate |
“Team meets the requirements” | Accurate/Inaccurate |
“Team achieves its goals” | Accurate/Inaccurate |
“Team is productive” | Accurate/Inaccurate |
“Team is efficient” | Accurate/Inaccurate |
“Team communicates effectively” | Accurate/Inaccurate |
“Team has a clear group structure” | Accurate/Inaccurate |
“Team easily comes to a consensus” | Accurate/Inaccurate |
“Team gives effective feedback” | Accurate/Inaccurate |
“Team makes decisions easily” | Accurate/Inaccurate |
“Team participates equally” | Agree/Disagree |
“Members on this team are clear about their roles” | Agree/Disagree |
“Team members are cooperative” | Agree/Disagree |
“Subgroups are necessary” | Agree/Disagree |
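The sketch below illustrates the ordinal-to-numeric conversion described above. Only the scale endpoints are specified in the questionnaire; the intermediate category labels and the example responses are assumptions for illustration. The resulting per-condition averages would then be compared with the t-test described above.

```python
# Minimal sketch of converting ordinal responses to the 1 (low) to 7 (high) scale.
# Intermediate category labels are assumed for illustration only.
SCALE = {
    "Very inaccurate": 1, "Moderately inaccurate": 2, "Slightly inaccurate": 3,
    "Neither accurate nor inaccurate": 4,
    "Slightly accurate": 5, "Moderately accurate": 6, "Very accurate": 7,
}

def to_numeric(responses):
    """Map each ordinal response to its numeric value on the 1-7 scale."""
    return [SCALE[r] for r in responses]

# Hypothetical responses to "Team communicates effectively" in each condition.
nominal = to_numeric(["Slightly inaccurate", "Neither accurate nor inaccurate", "Slightly accurate"])
interacting = to_numeric(["Moderately accurate", "Very accurate", "Slightly accurate"])

print(sum(nominal) / len(nominal), sum(interacting) / len(interacting))  # condition averages
```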
Table 2 presents the questions with significant differences between the nominally inspired (μN) and interacting (μI) sub-structured teams. The table presents the means, standard deviations, and p-values for session 1 (s1) and session 2 (s2). Results show that during the second problem-solving session, the nominally inspired teams perceive their teams as significantly less efficient (p = 0.048) and as having a less clear group structure (p = 0.057). Across both problem-solving sessions, the nominally inspired sub-structured teams deem their teams as having less effective feedback (p = 0.027 and p = 0.049), less effective communication (p = 0.022 and p = 0.057), and less equal (p = 0.007 and p = 0.001) and cooperative (p = 0.040 and p = 0.019) participation. Figure 8 dives deeper into these latter four dimensions. Overall, while the nominally inspired substructure outperforms the interacting substructure in terms of performance, there are additional perceived downstream effects not necessarily related to how the team performs but related to team operations (i.e., team process). These results highlight that the nominally inspired substructure can have negative, consequential impacts on how supported members feel in the team, whether in terms of stress or perceptions of how their team works together. While these negative perceptions exist, they did not have a detrimental impact on performance.
Question | Response type | Means/STD (s1) | Means/STD (s2) | P-value (s1) | P-value (s2)
---|---|---|---|---|---
“Team is efficient” | Accurate/Inaccurate | | | p = 0.749 | p = 0.048
“Team communicates effectively” | Accurate/Inaccurate | | | p = 0.022 | p = 0.057
“Team has a clear group structure” | Accurate/Inaccurate | | | p = 0.336 | p = 0.057
“Team gives effective feedback” | Accurate/Inaccurate | | | p = 0.027 | p = 0.049
“Team participates equally” | Agree/Disagree | | | p = 0.007 | p = 0.010
“Team members are cooperative” | Agree/Disagree | | | p = 0.040 | p = 0.019
5 Discussion
This work studies two different substructures of teams. While the meta-teams are interacting, their substructures are not necessarily so. On the one hand, the interacting sub-structured configuration consists of members within the same discipline who can communicate and access each other's completed designs. On the other hand, the nominally inspired sub-structured configuration consists of members within the same discipline who cannot directly communicate with one another or see each other's completed designs. In essence, the disciplines in this latter structure form a nominal team, or a team of individuals working alone. They submit their parts of the task to the problem manager without direct feedback from their counterparts, and the problem manager then chooses the final designs.
Comparing team performance, the nominally inspired sub-structured teams perform significantly better than the interacting sub-structured teams in terms of the profits of their final plans. This trend in performance is seen across both problem-solving sessions, though it is most prominent in the first session. Recall that a problem shock introduced between the problem-solving sessions added constraints to the design spaces. The trends show that the nominally inspired sub-structured teams reach higher performance levels in the first session and maintain those levels in the second, after the shock. The interacting sub-structured teams, on the other hand, improve their performance between sessions, though still without reaching the levels achieved by the nominally inspired sub-teams.
The results from the team behaviors show that while teams act equally, the interacting sub-structured groups communicate significantly more than the nominally inspired groups. While this result might at first seem trivial, pairing it with performance suggests that more frequent discourse may add an additional burden onto teams, shifting their cognitive efforts from designing to talking and leading to inferior performance. Results from previous studies in the psychology literature support such a claim. Verbalization of one's own ideas has been shown to have a significant detrimental impact on outcomes, and communication follows a curvilinear relationship with performance, where too little or too much can be detrimental [65,66]. Of course, not all communication is unfavorable. The nominally inspired sub-structured teams' communication is more directed. All team members direct their communication to the problem manager, who manages not only the team's final plans but also the information flow between and among disciplines. This more targeted effort of communication and information flow may be more efficient and effective for team performance.
However, this more targeted communication certainly does not come burden free. Even though the problem managers play an information-bridging role between disciplines in both team conditions, the questionnaire data reveal that the problem managers in the nominally inspired sub-structured teams perceive much greater cognitive and workload burdens. These burdens are heightened not only relative to the other members of their teams but also relative to the problem managers in the interacting sub-structured condition. This can be a consequence of the greater centrality of the problem managers in these teams. In the nominally inspired condition, the problem managers not only synthesize the final designs but also need to communicate across and within disciplines (via more channels), adding cognitive demand for these managers.
Moreover, members within the nominally inspired sub-structured teams perceive their teams as significantly inferior across various dimensions. Most prominent among these are communication and feedback effectiveness and equal member participation. These are noteworthy dimensions because, of the set of questions, they are the most relevant to the team's process rather than team outcomes. For example, team communication could have been perceived as inferior because not all members can directly interact, which goes hand in hand with effective feedback. While the problem manager in the nominally inspired substructure should have been providing feedback to the rest of the team, perhaps they could not provide the feedback related to the drone designs or operations plans that members in those disciplines hoped for. Yet, these nominally inspired sub-structured teams perform better than their interacting sub-structured counterparts, even though their members did not feel as supported.
To bring these implications to a more practical light, imagine a scenario where an engineering design team is working on bringing a new product to market. The team is composed of engineers, who design the product, and a business unit, which identifies market entry strategies. In practice, this team can take several different forms, such as being divided by function or being cross functional. The former is the structure studied in this work. For this type of structure to be optimal, instead of the engineers working directly with each other and the business unit members working directly with each other, they would be better off working individually. A central, cross-functional team manager then mediates the collection of ideas and coalesces the final product strategy. Our results indicate that this team structure may be optimal for producing the best product launch.
The implications of this research extend, perhaps even more opportunely, to distributed teaming. With advances in the digital age and technology, and as a likely lasting by-product of the COVID-19 pandemic, teamwork, communication, and collaboration are all taking reimagined, often computer- or technology-mediated forms in the workforce [67,68]. For example, distributed product development teams are becoming more prevalent. The results here indicate that less frequent, more targeted communication is better. Because distributed teams rely on technology to interact, these technologies can begin to direct and restrict members' communication frequencies and patterns to improve team effectiveness.
The important takeaway from this work is that, even in interdisciplinary teams, structuring the homogeneous substructures in a nominally inspired manner, with individuals solving their tasks alone, delivers results that are superior to having interacting teams within the disciplines. This work notably extends the emerging set of findings that nominal design teams appear to be superior to interacting design teams. This work has implications for how industry could alternatively structure their teams in practice and how engineering instructors could structure teams within educational projects.
6 Limitations and Future Work
There are several limitations of this research that may affect the generalizability of the findings but that also open opportunities for future work. First, the study was run entirely with mechanical engineering students, so the results do not span varying levels of professional experience. Even so, we do not expect the findings to be significantly constrained to student groups or to mechanical engineers. The design task is not wholly related to mechanical engineering, and the prestudy questionnaire, which queried participants about prior exposure to drone design/operations, showed that they had essentially none. So, the results are not directly linked to the demographics in this case. Consequently, there is an expectation that these results will generalize across individuals of other backgrounds and experience. Within human subject studies like this, backgrounds and experiences need to be controlled for, which was done here. It would be interesting to now extend this work to professional mechanical engineers to identify how these findings generalize across expertise levels as well as disciplines.
Second, several results rely on self-assessments as the method of data collection, namely, the assessments of cognitive experience, workload, and the general experience of working in the team. It should be acknowledged that self-assessments like these are subjective and inherently come with limitations. For example, individuals may not always judge their own abilities accurately and may be biased, overevaluating or underevaluating their own performance [69,70]. However, the results here do not rely solely on self-reflections of performance but also on behavioral experiences, such as stress. Regardless, it is important to note these inherent limitations of self-assessments. Furthermore, it should also be noted that there are many other team constructs that have been shown to impact performance but were not studied here, including trust, psychological safety, information sharing, and even gender diversity [71–74]. The former two of these constructs lend themselves nicely to direct research extensions. For example, it would be interesting to study how the constriction of communication within the nominally inspired sub-teams impacts trust across the different disciplines, especially if they are not able to directly interact or build norms at the beginning of team formation. This restriction of communication may also negatively impact psychological safety, which has been shown to be one of the strongest predictors of team outcomes.
The results presented in this article open several immediate opportunities for future research. First, because the main difference between the team structures lies in the communication channels, the content of chat within a team may also be critical to effective performance. Currently, only the count of communication is considered. Techniques in natural language processing can examine the cohesion and content of the discourse and what type of information is transferred between members. This can begin to determine the quality and content of the communication and whether these are correlated with performance, which, as discussed previously, communication can also mediate. Moreover, the action analysis also considers only action count. The diversity and types of actions being performed can be coded as an additional depth of behavioral data. Another facet of future work can look at more precise correlations between the questionnaire data and overall team performance. Currently, the results identify insights between the nominally inspired and interacting sub-structured teams, but comparing higher- and lower-performing teams could reveal further insights on teaming. This also extends to examining the differences in team behaviors, communications, and actions between higher- and lower-performing teams.
7 Conclusion
This work studies the effects of different substructures on the behaviors, cognition, and performance of interdisciplinary teams during a complex engineering task. The substructures embody collaborative, interacting teams and nominally inspired teams of individuals working on their tasks alone. Results show that teams with nominally inspired substructures outperform teams with interacting substructures. In addition, communication in teams with nominally inspired substructures is more targeted and efficient, occurring significantly less frequently. However, this nature of interaction can place an extra burden on managers or mediators who are more central within this type of communication network. Furthermore, team members perceive the nominally inspired substructure as inferior across several dimensions, including the effectiveness of team communication and feedback and the equality of contribution among members. Overall, the results provide insights into the interaction patterns of interdisciplinary teams and the advantage that results from synergizing the benefits of individual problem solving with interacting teams.
Footnote
Source code for HyForm is available at https://github.com/hyform
Acknowledgment
The authors would like to thank Gary Stump for his discussion on this project. This work was supported by the Air Force Office of Scientific Research (Grant No. FA9550-18-1-0088) and the Defense Advanced Research Projects Agency (Cooperative Agreement N66001-17-1-4064). Any opinions, findings, and conclusions or recommendations expressed in this article are those of the authors and do not necessarily reflect the views of the sponsors. A version of this article has been accepted at the International Design and Engineering Technical Conferences [75].
Conflict of Interest
There are no conflicts of interest.
Data Availability Statement
The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request.