Actor-Partner Interdependence Model (APIM)
Overview
This tutorial reviews the Actor-Partner Interdependence Model (APIM; Kashy & Kenny, 2000; Kenny, Kashy, & Cook, 2006), which is often used to examine the association (1) between two constructs for two people using cross-sectional data, or (2) between the same construct from two people across two time points.
In this tutorial, we are going to examine the association between verbal and performance ability using measures from first grade and sixth grade. We are interested in simultaneously examining whether (1) verbal ability in the first grade is predictive of verbal ability in the sixth grade, (2) performance ability in the first grade is predictive of performance ability in the sixth grade, (3) verbal ability in the first grade is predictive of performance ability in the sixth grade, and (4) performance ability in the first grade is predictive of verbal ability in the sixth grade.
When working with people, the above points 1 and 2 are often referred to as “actor effects” and points 3 and 4 are often referred to as “partner effects.”
While this example is not a “traditional” dyad - i.e., two distinguishable people - the analytic processes demonstrated here are applicable to the examination of any bivariate relationship.
In addition, the accompanying “APIM_Tutorial_2022August20.rmd” file contains all of the code presented in this tutorial and can be opened in RStudio (a more user-friendly interface to R).
Outline
In this tutorial, we’ll cover…
- Reading in the data and loading needed packages.
- Descriptive statistics for dyadic data.
- Dyadic data preparation.
- APIM model using the nlme package.
- Other resources.
Read in the data and load needed packages.
Let’s read the data into R.
The data set (“wisc3raw_gender”) we are working with contains repeated measures of different assessments from children during grades 1, 2, 4, and 6.
The data set is stored as a .csv file (a comma-separated values file, which can be created by saving an Excel file as a csv document) on my computer’s desktop.
# Set working directory (i.e., where your data file is stored)
# This can be done by going to the top bar of RStudio and selecting
# "Session" --> "Set Working Directory" --> "Choose Directory" -->
# finding the location of your file
setwd("~/Desktop") # Note: You can skip this line if you have
# the data file and this .rmd file stored in the same directory
# Read in the repeated measures data
data <- read.csv(file = "wisc3raw_gender.csv", header = TRUE, sep = ",")
# View the first 10 rows of the repeated measures data
head(data, 10)
## X id verb1 verb2 verb4 verb6 perfo1 perfo2 perfo4 perfo6 info1 comp1
## 1 1 1 24.42 26.98 39.61 55.64 19.84 22.97 43.90 44.19 31.287 25.627
## 2 2 2 12.44 14.38 21.92 37.81 5.90 13.44 18.29 40.38 13.801 14.787
## 3 3 3 32.43 33.51 34.30 50.18 27.64 45.02 46.99 77.72 34.970 34.675
## 4 4 4 22.69 28.39 42.16 44.72 33.16 29.68 45.97 61.66 24.795 31.391
## 5 5 5 28.23 37.81 41.06 70.95 27.64 44.42 65.48 64.22 25.263 30.263
## 6 6 6 16.06 20.12 38.02 39.94 8.45 15.78 26.99 39.08 15.402 23.399
## 7 7 7 8.50 16.49 28.71 40.83 4.85 17.24 30.75 41.03 15.380 -1.253
## 8 8 8 14.11 20.92 21.53 25.68 18.72 21.43 33.63 42.36 19.883 6.704
## 9 9 9 15.52 23.36 37.41 45.52 13.37 20.13 35.36 38.53 12.632 13.847
## 10 10 10 20.07 33.38 37.71 48.65 15.26 23.67 42.59 48.39 23.690 25.446
## simu1 voca1 info6 comp6 simu6 voca6 momed grad constant female
## 1 22.932 22.215 69.883 44.424 68.045 51.162 9.5 0 1 1
## 2 7.581 15.373 41.871 44.862 33.897 37.741 5.5 0 1 1
## 3 28.052 26.841 60.424 50.260 35.844 55.477 14.0 1 1 1
## 4 8.208 20.197 52.865 42.669 45.802 35.987 14.0 1 1 0
## 5 15.977 35.417 67.368 86.654 72.368 60.417 11.5 0 1 1
## 6 11.453 20.560 46.437 52.956 22.537 47.716 14.0 1 1 0
## 7 2.318 13.004 53.977 36.341 39.912 35.373 9.5 0 1 1
## 8 14.160 14.868 26.901 33.020 21.679 22.763 5.5 0 1 1
## 9 10.276 23.224 54.737 40.163 36.591 54.803 9.5 0 1 1
## 10 11.416 21.786 45.119 52.232 53.508 53.929 11.5 0 1 1
Subset the data to variables of interest.
# Subset to variables of interest
data <- data[, c("id", "verb1", "verb6", "perfo1", "perfo6")]
# View the first 10 rows of the data
head(data, 10)
## id verb1 verb6 perfo1 perfo6
## 1 1 24.42 55.64 19.84 44.19
## 2 2 12.44 37.81 5.90 40.38
## 3 3 32.43 50.18 27.64 77.72
## 4 4 22.69 44.72 33.16 61.66
## 5 5 28.23 70.95 27.64 64.22
## 6 6 16.06 39.94 8.45 39.08
## 7 7 8.50 40.83 4.85 41.03
## 8 8 14.11 25.68 18.72 42.36
## 9 9 15.52 45.52 13.37 38.53
## 10 10 20.07 48.65 15.26 48.39
In the data, we can see each row contains information for one child and the multiple time points are contained in the columns. In this data set, there are columns for:
- Child ID (id)
- Child’s verbal score during first grade (verb1)
- Child’s verbal score during sixth grade (verb6)
- Child’s performance score during first grade (perfo1)
- Child’s performance score during sixth grade (perfo6)
Load the R packages we need.
Packages in R are a collection of functions (and their documentation/explanations) that enable us to conduct particular tasks, such as plotting or fitting a statistical model.
# install.packages("ggplot2") # Install package if you have never used it before
library(ggplot2) # For plotting
# install.packages("devtools") # Install package if you have never used it before
require(devtools) # For session_info() reported at the end of the tutorial
# install.packages("nlme") # Install package if you have never used it before
library(nlme) # For APIM
# install.packages("psych") # Install package if you have never used it before
library(psych) # For descriptive statistics
# install.packages("reshape") # Install package if you have never used it before
library(reshape) # For reshaping the data (long to wide)
Before diving into the data, we will make a long version (i.e., each repeated measure has its own row) of the data set for later use.
data_long <- reshape(# Select data set
                     data = data,
                     # Identify repeated measures variables
                     varying = c("verb1", "verb6",
                                 "perfo1", "perfo6"),
                     # Create new variable that represents time
                     timevar = c("grade"),
                     # Identify child ID variable
                     idvar = c("id"),
                     # Note direction of data reformat
                     direction = "long",
                     # No spaces in new column names
                     sep = "")
# For easy viewing - reorder by id and grade
data_long <- data_long[order(data_long$id, data_long$grade), ]
# View the first 10 rows of the repeated measures data
head(data_long, 10)
## id grade verb perfo
## 1.1 1 1 24.42 19.84
## 1.6 1 6 55.64 44.19
## 2.1 2 1 12.44 5.90
## 2.6 2 6 37.81 40.38
## 3.1 3 1 32.43 27.64
## 3.6 3 6 50.18 77.72
## 4.1 4 1 22.69 33.16
## 4.6 4 6 44.72 61.66
## 5.1 5 1 28.23 27.64
## 5.6 5 6 70.95 64.22
Note how each time point (i.e., grades 1 and 6) now has its own row for each child.
Descriptive Statistics for Dyadic Data.
Before we run our models, it is useful to become familiar with the data via plotting and descriptive statistics.
Let’s begin with descriptive statistics of our four variables of interest: first grade verbal and performance ability, and sixth grade verbal and performance ability.
describe(data$verb1)
## vars n mean sd median trimmed mad min max range skew kurtosis se
## X1 1 204 19.59 5.81 19.34 19.5 5.41 3.33 35.15 31.82 0.13 -0.05 0.41
describe(data$verb6)
## vars n mean sd median trimmed mad min max range skew kurtosis
## X1 1 204 43.75 10.67 42.55 43.46 11.3 17.35 72.59 55.24 0.24 -0.36
## se
## X1 0.75
describe(data$perfo1)
## vars n mean sd median trimmed mad min max range skew kurtosis se
## X1 1 204 17.98 8.35 17.66 17.69 8.3 0 46.58 46.58 0.35 -0.11 0.58
describe(data$perfo6)
## vars n mean sd median trimmed mad min max range skew kurtosis
## X1 1 204 50.93 12.48 51.76 51.07 13.27 10.26 89.01 78.75 -0.06 0.18
## se
## X1 0.87
We can see that both the mean and standard deviation of verbal and performance ability increase from first to sixth grade. While this is worth noting, the APIM we fit below does not examine these mean-level changes; it focuses on how grade 1 scores predict grade 6 scores.
Next, we’ll plot the distributions of each of these variables as well.
ggplot(# Select data set and variable to plot
data = data, aes(x = verb1)) +
# Create histogram of selected variable and
# set color of histogram bar
geom_histogram(fill = "white", color = "black") +
# Label x-axis of histogram
labs(x = "Verbal Ability Grade 1") +
# Plot aesthetics
theme_classic()
ggplot(# Select data set and variable to plot
data = data, aes(x = verb6)) +
# Create histogram of selected variable and
# set color of histogram bar
geom_histogram(fill = "white", color = "black") +
# Label x-axis of histogram
labs(x = "Verbal Ability Grade 6") +
# Plot aesthetics
theme_classic()
ggplot(# Select data set and variable to plot
data = data, aes(x = perfo1)) +
# Create histogram of selected variable and
# set color of histogram bar
geom_histogram(fill = "white", color = "black") +
# Label x-axis of histogram
labs(x = "Performance Ability Grade 1") +
# Plot aesthetics
theme_classic()
ggplot(# Select data set and variable to plot
data = data, aes(x = perfo6)) +
# Create histogram of selected variable and
# set color of histogram bar
geom_histogram(fill = "white", color = "black") +
# Label x-axis of histogram
labs(x = "Performance Ability Grade 6") +
# Plot aesthetics
theme_classic()
Next, let’s examine the associations among our four variables of interest (e.g., rank-order stability across time) using correlations and a plot.
# Correlations
cor(data[, 2:5])
## verb1 verb6 perfo1 perfo6
## verb1 1.0000000 0.6541040 0.6101379 0.4779672
## verb6 0.6541040 1.0000000 0.6183155 0.6106694
## perfo1 0.6101379 0.6183155 1.0000000 0.6958321
## perfo6 0.4779672 0.6106694 0.6958321 1.0000000
# Plot
pairs.panels(data[, c("verb1", "verb6", "perfo1", "perfo6")])
We can see there are strong, positive associations both across time and constructs.
Dyadic Data Preparation.
We have already manipulated the data from “wide” to “long.” Dyadic/bivariate analyses require further manipulation in order to get the data in the correct format for our analyses. We will walk through the data prep in two steps.
First, we need to create one column that has the information for both outcome variables - i.e., for each person, the verb6 and perfo6 values will alternate. This is almost like repeated measures data, but instead of having multiple time points nested within person, we have multiple (two) variables nested within person.
data_melt <- reshape::melt(# Select data set
                           data = data,
                           # Identify columns that we want to remain the same,
                           # that is, the columns that we don't want "long"
                           id.vars = c("id", "verb1", "perfo1"),
                           # Do not remove missing data
                           na.rm = FALSE)
# View the first 10 rows of the data
head(data_melt, 10)
## id verb1 perfo1 variable value
## 1 1 24.42 19.84 verb6 55.64
## 2 2 12.44 5.90 verb6 37.81
## 3 3 32.43 27.64 verb6 50.18
## 4 4 22.69 33.16 verb6 44.72
## 5 5 28.23 27.64 verb6 70.95
## 6 6 16.06 8.45 verb6 39.94
## 7 7 8.50 4.85 verb6 40.83
## 8 8 14.11 18.72 verb6 25.68
## 9 9 15.52 13.37 verb6 45.52
## 10 10 20.07 15.26 verb6 48.65
A little more data management on our newly created data set.
# Rename "variable" and "value" variables to "grade6_variable" and "grade6_outcome"
colnames(data_melt)[4:5] <- c("grade6_variable", "grade6_outcome")
# Re-order for convenience
data_melt <- data_melt[order(data_melt$id, data_melt$grade6_variable), ]
# View the first 10 rows of the data
head(data_melt, 10)
## id verb1 perfo1 grade6_variable grade6_outcome
## 1 1 24.42 19.84 verb6 55.64
## 205 1 24.42 19.84 perfo6 44.19
## 2 2 12.44 5.90 verb6 37.81
## 206 2 12.44 5.90 perfo6 40.38
## 3 3 32.43 27.64 verb6 50.18
## 207 3 32.43 27.64 perfo6 77.72
## 4 4 22.69 33.16 verb6 44.72
## 208 4 22.69 33.16 perfo6 61.66
## 5 5 28.23 27.64 verb6 70.95
## 209 5 28.23 27.64 perfo6 64.22
Second, we need to create two dummy variables (each 0/1) that will be useful in our analyses to “turn on/off” a row (more on this later). We will create one column that assigns the first row of the double entry data to 1, and we’ll call this “verb_on.” We will create another column that assigns the second row of the double entry data to 1, and we’ll call this “perform_on.”
# Create new variable ("verb_on") that repeats the sequence 1 0
# half the length of the data set (since 2 * half of the rows = all rows)
data_melt$verb_on <- rep(c(1, 0), times = (nrow(data_melt)/2))
# Create new variable ("perform_on") that repeats the sequence 0 1
# half the length of the data set (since 2 * half of the rows = all rows)
data_melt$perform_on <- rep(c(0, 1), times = (nrow(data_melt)/2))
# View the first 10 rows of the data
head(data_melt, 10)
## id verb1 perfo1 grade6_variable grade6_outcome verb_on perform_on
## 1 1 24.42 19.84 verb6 55.64 1 0
## 205 1 24.42 19.84 perfo6 44.19 0 1
## 2 2 12.44 5.90 verb6 37.81 1 0
## 206 2 12.44 5.90 perfo6 40.38 0 1
## 3 3 32.43 27.64 verb6 50.18 1 0
## 207 3 32.43 27.64 perfo6 77.72 0 1
## 4 4 22.69 33.16 verb6 44.72 1 0
## 208 4 22.69 33.16 perfo6 61.66 0 1
## 5 5 28.23 27.64 verb6 70.95 1 0
## 209 5 28.23 27.64 perfo6 64.22 0 1
Please note that this data preparation is probably not the most elegant way to organize the data. There are alternative ways one could prepare your data (https://github.com/RandiLGarcia/2day-dyad-workshop/blob/master/Day%201/R%20Code/Day%201-Data%20Restructuring.Rmd), but it will depend on how you choose to run your analysis (described further later).
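One such alternative, sketched below under the assumption that the tidyr package (not otherwise used in this tutorial) is installed, stacks the grade 6 outcomes with pivot_longer() and then recreates the two dummy variables:

```r
# An alternative double-entry reshape using tidyr (an assumption: tidyr is
# not loaded elsewhere in this tutorial; install.packages("tidyr") if needed)
library(tidyr)

data_melt2 <- pivot_longer(data,
                           # Stack the two grade 6 outcomes into one column
                           cols = c("verb6", "perfo6"),
                           names_to = "grade6_variable",
                           values_to = "grade6_outcome")

# Recreate the 0/1 "switch" variables from the stacked variable names
data_melt2$verb_on <- ifelse(data_melt2$grade6_variable == "verb6", 1, 0)
data_melt2$perform_on <- 1 - data_melt2$verb_on
```

Deriving the dummy codes from grade6_variable (rather than from row order) has the advantage of staying correct even if the rows are later re-sorted.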
APIM using the nlme package.
Now that we know a bit more about the data we are working with and have the data prepared in a usable format, we can set up our APIM model. We’ll fit this model using the nlme package.
Specifically, we’ll examine whether:
- verbal ability in the first grade is predictive of verbal ability in the sixth grade (verbal “actor” effect),
- performance ability in the first grade is predictive of performance ability in the sixth grade (performance “actor” effect),
- verbal ability in the first grade is predictive of performance ability in the sixth grade (verbal “partner” effect), and
- performance ability in the first grade is predictive of verbal ability in the sixth grade (performance “partner” effect).
Before running this full model, we will examine the empty model to determine how much variability there is within- and between-persons. Specifically,
\[Grade6Outcome_{i} = \beta_{0V}VerbOn_{i} + \beta_{0P}PerformOn_{i} + e_{Vi} + e_{Pi}\]
Empty model.
apim_empty <- gls(# The outcome variable (grade6_outcome) is regressed onto
                  # no intercept (-1) since we separately estimate intercepts
                  # for the two variables with dummy coded variables, specifically
                  # verb_on and perform_on
                  grade6_outcome ~ -1 +
                    verb_on +
                    perform_on,
                  # Select data set
                  data = data_melt,
                  # Set correlation structure, in this case,
                  # compound symmetry within each individual
                  correlation = corCompSymm(form = ~1|id),
                  # Set the weights of the variances,
                  # allowing for differences between
                  # variables' error terms
                  weights = varIdent(form = ~1|verb_on),
                  # Exclude rows with missing data
                  na.action = na.exclude)
# Examine the model summary
summary(apim_empty)
## Generalized least squares fit by REML
## Model: grade6_outcome ~ -1 + verb_on + perform_on
## Data: data_melt
## AIC BIC logLik
## 3063.861 3083.893 -1526.931
##
## Correlation Structure: Compound symmetry
## Formula: ~1 | id
## Parameter estimate(s):
## Rho
## 0.6106684
## Variance function:
## Structure: Different standard deviations per stratum
## Formula: ~1 | verb_on
## Parameter estimates:
## 1 0
## 1.000000 1.170165
##
## Coefficients:
## Value Std.Error t-value p-value
## verb_on 43.74990 0.7467026 58.5908 0
## perform_on 50.93162 0.8737656 58.2898 0
##
## Correlation:
## verb_n
## perform_on 0.611
##
## Standardized residuals:
## Min Q1 Med Q3 Max
## -3.25897805 -0.72994548 -0.02108776 0.72003021 3.05118458
##
## Residual standard error: 10.66505
## Degrees of freedom: 408 total; 406 residual
We examine the correlation of the verbal and performance error terms to determine the degree of non-independence in the data. Here, Rho = 0.61 is the correlation across constructs, indicating that children who have higher verbal ability also tend to have higher performance ability.
Other things to note in this output…
The results for “verb_on” indicate the average or expected verbal score at grade 6 is 43.75.
The results for “perform_on” indicate the average or expected performance score at grade 6 is 50.93.
Both of these expected values correspond to their respective averages in the raw data.
The verbal and performance scores each have their own estimated residual standard deviation. The estimated residual standard error for verbal scores is 10.67 and the estimated residual standard error for performance scores is 12.48 (1.17 * 10.67).
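As a quick sanity check (assuming the data frame data from earlier in the tutorial is still in memory), we can compare these numbers against the raw data and the output above:

```r
# The two intercepts should match the raw grade 6 means
round(mean(data$verb6), 2)   # 43.75
round(mean(data$perfo6), 2)  # 50.93

# The variance function reports the performance stratum's SD as a ratio to
# the verbal (reference) stratum, so the performance residual SD is
round(1.170165 * 10.66505, 2)  # 12.48
```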
Next, we are going to run our full APIM model using the two-intercept approach. Specifically,
\[\begin{aligned} Grade6Outcome_{i} = &\beta_{0V}VerbOn_{i} + \beta_{1V}VerbOn_{i}*Verb1_{i} + \beta_{2V}VerbOn_{i}*Perform1_{i} \\ &+ \beta_{0P}PerformOn_{i} + \beta_{1P}PerformOn_{i}*Perform1_{i} \\ &+ \beta_{2P}PerformOn_{i}*Verb1_{i} + e_{Vi} + e_{Pi} \end{aligned}\]
So when “verb_on” is equal to 0:
\[\begin{aligned} Grade6Outcome_{i} = &\beta_{0P}PerformOn_{i} + \beta_{1P}PerformOn_{i}*Perform1_{i} \\ &+ \beta_{2P}PerformOn_{i}*Verb1_{i} + e_{Pi} \end{aligned}\]
and when “perform_on” is equal to 0:
\[\begin{aligned} Grade6Outcome_{i} = &\beta_{0V}VerbOn_{i} + \beta_{1V}VerbOn_{i}*Verb1_{i} + \beta_{2V}VerbOn_{i}*Perform1_{i} \\ &+ e_{Vi} \end{aligned}\]
Full model.
apim_full <- gls(# The outcome variable (grade6_outcome) is regressed onto
                 # no intercept (-1) since we separately estimate intercepts
                 # for the two variables with dummy coded variables, specifically
                 # verb_on and perform_on, and
                 # actor and partner effects as indicated by
                 # the interaction terms of the variable name and dummy code
                 grade6_outcome ~ -1 +
                   verb_on +
                   perform_on +
                   verb1:verb_on + # verbal "actor" effect
                   perfo1:perform_on + # performance "actor" effect
                   verb1:perform_on + # verbal "partner" effect
                   perfo1:verb_on, # performance "partner" effect
                 # Select data set
                 data = data_melt,
                 # Set correlation structure, in this case,
                 # compound symmetry within each individual
                 correlation = corCompSymm(form = ~1|id),
                 # Set the weights of the variances,
                 # allowing for differences between
                 # variables' error terms
                 weights = varIdent(form = ~1|verb_on),
                 # Exclude rows with missing data
                 na.action = na.exclude)
# Examine the model summary
summary(apim_full)
## Generalized least squares fit by REML
## Model: grade6_outcome ~ -1 + verb_on + perform_on + verb1:verb_on + perfo1:perform_on + verb1:perform_on + perfo1:verb_on
## Data: data_melt
## AIC BIC logLik
## 2879.029 2914.997 -1430.514
##
## Correlation Structure: Compound symmetry
## Formula: ~1 | id
## Parameter estimate(s):
## Rho
## 0.3116371
## Variance function:
## Structure: Different standard deviations per stratum
## Formula: ~1 | verb_on
## Parameter estimates:
## 1 0
## 1.000000 1.188537
##
## Coefficients:
## Value Std.Error t-value p-value
## verb_on 19.869325 1.8634458 10.662679 0.0000
## perform_on 30.049124 2.2147737 13.567582 0.0000
## verb_on:verb1 0.809886 0.1150893 7.037017 0.0000
## perform_on:perfo1 0.962419 0.0951429 10.115507 0.0000
## perform_on:verb1 0.182846 0.1367879 1.336709 0.1821
## verb_on:perfo1 0.446064 0.0800504 5.572292 0.0000
##
## Correlation:
## verb_n prfrm_ vrb_n:v1 prfrm_n:p1 prfrm_n:v1
## perform_on 0.312
## verb_on:verb1 -0.738 -0.230
## perform_on:perfo1 -0.011 -0.034 -0.190
## perform_on:verb1 -0.230 -0.738 0.312 -0.610
## verb_on:perfo1 -0.034 -0.011 -0.610 0.312 -0.190
##
## Standardized residuals:
## Min Q1 Med Q3 Max
## -2.55877989 -0.66034374 -0.02441614 0.62804538 3.69258918
##
## Residual standard error: 7.545255
## Degrees of freedom: 408 total; 402 residual
Let’s interpret the results!
The expected verbal score at grade 6 = 19.87 and the expected performance score at grade 6 = 30.05 when verbal and performance scores at grade 1 are equal to zero.
Actor effects:
- The “actor effect” of verbal ability is 0.81, indicating that a child’s grade 6 verbal ability is expected to be 0.81 points higher for every additional point in their grade 1 verbal ability score.
- The “actor effect” of performance ability is 0.96, indicating that a child’s grade 6 performance ability is expected to be 0.96 points higher for every additional point in their grade 1 performance ability score.
Partner effects:
- The “partner effect” of performance ability on verbal ability is 0.45, indicating that a child’s grade 6 verbal ability is expected to be 0.45 points higher for every additional point in their grade 1 performance ability score.
- The “partner effect” of verbal ability on performance ability is not significant, indicating that a child’s verbal ability at grade 1 is not associated with their performance ability at grade 6.
Other things to note:
- Rho = 0.31 is the residual correlation across constructs after accounting for the grade 1 predictors: children with higher-than-expected verbal ability also tend to have higher-than-expected performance ability.
- The verbal and performance scores each have their own estimated residual standard deviation. The estimated residual standard error for verbal scores is 7.55 and the estimated residual standard error for performance scores is 8.97 (1.188537 * 7.545255).
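If you would also like confidence intervals for these effects, nlme provides the intervals() function for gls objects; a minimal sketch, run after fitting apim_full above:

```r
# 95% confidence intervals for the fixed-effect coefficients
intervals(apim_full, which = "coef")
```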
Other Resources.
We’ve walked through one way of running an APIM model in R - however, there are alternative ways of doing so. Here are a few resources if you’d like to learn more about running an APIM model or dyadic data analyses in general.
- David Kenny’s website, where he and his colleagues have created some useful shiny apps for running dyadic analyses in R: http://davidakenny.net/DyadR/DyadRweb.htm
- Randi Garcia’s github page: https://github.com/RandiLGarcia
Additional Information
We created this tutorial with a system environment and versions of R and packages that might be different from yours. If R reports errors when you attempt to run this tutorial, running the code chunk below and comparing your output with the session information in the tutorial posted on the LHAMA website may be helpful.
session_info(pkgs = c("attached"))
## ─ Session info ───────────────────────────────────────────────────────────────
## setting value
## version R version 4.2.0 (2022-04-22)
## os macOS Big Sur/Monterey 10.16
## system x86_64, darwin17.0
## ui X11
## language (EN)
## collate en_US.UTF-8
## ctype en_US.UTF-8
## tz America/New_York
## date 2022-08-20
## pandoc 2.18 @ /Applications/RStudio.app/Contents/MacOS/quarto/bin/tools/ (via rmarkdown)
##
## ─ Packages ───────────────────────────────────────────────────────────────────
## package * version date (UTC) lib source
## devtools * 2.4.3 2021-11-30 [1] CRAN (R 4.2.0)
## ggplot2 * 3.3.6 2022-05-03 [1] CRAN (R 4.2.0)
## nlme * 3.1-157 2022-03-25 [1] CRAN (R 4.2.0)
## psych * 2.2.5 2022-05-10 [1] CRAN (R 4.2.0)
## reshape * 0.8.9 2022-04-12 [1] CRAN (R 4.2.0)
## usethis * 2.1.6 2022-05-25 [1] CRAN (R 4.2.0)
##
## [1] /Library/Frameworks/R.framework/Versions/4.2/Resources/library
##
## ──────────────────────────────────────────────────────────────────────────────