# Matrix Math Operations Using R

Matrix math operations using R are easy to conduct using the basic math operators `+`, `-`, `*` and `/`. Generally speaking, the `matrix` objects you perform math on must have the same dimensions as each other (that is, be conformable), but there are exceptions.

There are two kinds of matrix math:

- Math using a `matrix` simply as a series of values.
- Matrix math using array properties.

In the first case you can use `R` to carry out regular mathematical operations on a `matrix` (that is, a 2-dimensional array). In the second case you use `R` to conduct “proper” matrix math.

This article is not intended to be a detailed treatise on matrix math; think of it more as a rough guide to the possibilities. If `matrix` math is what you need then this might help point you in the right direction.


## Simple math

Matrix math operations using R can be fairly simple. In the “simple” case you can use a `matrix` with the basic math operators.

### Identical dimensions

The “simplest” situation is where you have `matrix` objects that have the same dimensions.

```
x <- matrix(c(1,1, 3,0, 1,0), ncol = 3)
y <- matrix(c(0,7, 0,5, 5,0), ncol = 3)
x ; y
[,1] [,2] [,3]
[1,] 1 3 1
[2,] 1 0 0
[,1] [,2] [,3]
[1,] 0 0 5
[2,] 7 5 0
```

The regular math operators deal with the calculations:

```
x + y
[,1] [,2] [,3]
[1,] 1 3 6
[2,] 8 5 0
x * y
[,1] [,2] [,3]
[1,] 0 0 5
[2,] 7 0 0
x / y
[,1] [,2] [,3]
[1,] Inf Inf 0.2
[2,] 0.1428571 0 NaN
x - y
[,1] [,2] [,3]
[1,] 1 3 -4
[2,] -6 -5 0
```

In the preceding example two `matrix` objects were used, but there is no reason why you cannot have more, as long as they all have the same dimensions.

### Matrix and a single numeric

If you have a `matrix` you can also use the basic operators with a single value:

```
x * 4.5
[,1] [,2] [,3]
[1,] 4.5 13.5 4.5
[2,] 4.5 0.0 0.0
y / 2
[,1] [,2] [,3]
[1,] 0.0 0.0 2.5
[2,] 3.5 2.5 0.0
```

### Matrix and multiple numeric

You can think of a `matrix` as a `vector` of values that happens to be split into rows and columns. This means that you can carry out math operations with a `matrix` and a `vector` of values, as long as the recycling process “fits” (i.e. is conformable).

In practical terms you can carry out a math operation between a `matrix` and a `vector` if the length of the `matrix` is exactly divisible by the length of the `vector` (i.e. the division produces an integer).

In the following example the `matrix` has 6 elements and therefore a `vector` would have to contain 1, 2, 3 or 6 elements to recycle exactly. If your `vector` does not fit exactly it is recycled as far as needed and you get a warning.

```
x
[,1] [,2] [,3]
[1,] 1 3 1
[2,] 1 0 0
length(x)
[1] 6
x * 2:4
[,1] [,2] [,3]
[1,] 2 12 3
[2,] 3 0 0
# Lengths that do not divide exactly are recycled with a warning
x + 1:4
[,1] [,2] [,3]
[1,] 2 6 2
[2,] 3 4 2
Warning message:
In x + 1:4 :
longer object length is not a multiple of shorter object length
```

It can help to see your `matrix` as a `vector`:

```
c(x)
[1] 1 1 3 0 1 0
c(x) * 2:4
[1] 2 3 12 0 3 0
c(x) + 1:4
[1] 2 3 6 4 2 2
Warning message:
In c(x) + 1:4 :
longer object length is not a multiple of shorter object length
```

Note that `c(x)` was used in the example but `as.numeric(x)` would also work.

### Other matrix math operations

Many math functions that can operate on a `vector` will also work with a `matrix`. For example:

```
xx
[,1] [,2] [,3]
[1,] 1 2 3
[2,] 4 5 6
log(xx)
[,1] [,2] [,3]
[1,] 0.000000 0.6931472 1.098612
[2,] 1.386294 1.6094379 1.791759
cos(xx)
[,1] [,2] [,3]
[1,] 0.5403023 -0.4161468 -0.9899925
[2,] -0.6536436 0.2836622 0.9601703
sqrt(xx)
[,1] [,2] [,3]
[1,] 1 1.414214 1.732051
[2,] 2 2.236068 2.449490
```

### Non-conformable matrices

When you have two `matrix` objects of different dimensions you get an error. This example shows a single-column `matrix`:

```
yy <- matrix(c(10,100), ncol = 1)
yy
[,1]
[1,] 10
[2,] 100
x * yy
Error in x * yy : non-conformable arrays
```

In this case the `matrix` can be converted to a `vector`, giving 2 elements. The number of elements in the first `matrix` is exactly divisible by this, which permits the math:

```
x * c(yy)
[,1] [,2] [,3]
[1,] 10 30 10
[2,] 100 0 0
x + c(yy)
[,1] [,2] [,3]
[1,] 11 13 11
[2,] 101 100 100
```

There are other solutions for this case:

```
xx * c(yy)
sweep(xx, 1, yy, `*`)
apply(xx, 2, function(x) x * yy)
xx * yy[,1]
xx * yy[row(xx),]
```

These solutions work with any of the math operators `+`, `-`, `*` and `/`.
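As a quick sketch of one of those alternatives in action, here is `sweep()` with `+` instead of `*` (the values of `xx` and `yy` are recreated from the output shown earlier, so that part is an assumption):

```
# Recreate the example objects (assumed from the output shown earlier)
xx <- matrix(c(1,4, 2,5, 3,6), ncol = 3)   # 2 x 3
yy <- matrix(c(10, 100), ncol = 1)         # 2 x 1
# Row-wise addition: row 1 gains 10, row 2 gains 100
sweep(xx, 1, yy, `+`)
```

The result matches `xx + c(yy)`, since element-wise recycling runs down the columns.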

## Matrix Mathematics

Matrix math operations using R involve more than just simple math. In mathematics, `matrix` math is a substantial topic. Using `R` you can carry out various matrix math computations. These include:

- Inner product: `%*%`
- Outer product: `outer()` and `%o%`
- Kronecker product: `kronecker()` and `%x%`
- Matrix diagonals: `diag()`
- Eigenvalues: `eigen()`
- Singular value decomposition: `svd()`
- Cholesky (Choleski) decomposition: `chol()`
- QR decomposition: `qr()`
- Determinant: `det()`
- Cross-product: `crossprod()` and `tcrossprod()`
- Transposition: `t()`
- Upper/lower triangles: `upper.tri()` and `lower.tri()`
- Row/column index: `row()` and `col()`

### Inner product

Standard matrix multiplication (the inner product) is a way to multiply two `matrix` objects. You use the `%*%` operator in `R` to execute the math.

```
X %*% Y
```

In general, if the number of columns of the first `matrix` equals the number of rows of the second `matrix`, you can use the matrix-math operator `%*%` to multiply:

```
X <- matrix(c(2,7,4,8,5,9), ncol = 3)
Y <- matrix(c(1,3,4,6,1,2), ncol = 2)
X ; Y
[,1] [,2] [,3]
[1,] 2 4 5
[2,] 7 8 9
[,1] [,2]
[1,] 1 6
[2,] 3 1
[3,] 4 2
X %*% Y
[,1] [,2]
[1,] 34 26
[2,] 67 68
```

### Outer product

The outer product is a `matrix` operation that takes a `matrix` (or `vector` or `array`) and applies a function to it, with a second `matrix` (or `vector` or `array`) as an argument. There are two options in `R`:

```
outer(X, Y, FUN = "*", ...)
X %o% Y
```

The `outer()` command allows any math function, with the default being `*`. Essentially `X` and `Y` are the main arguments (you can pass additional parameters too). The `%o%` operator (the letter o) is a convenience for multiplication (i.e. `FUN = "*"`), so you end up specifying `X` multiplied by `Y`.

Use two vectors and the result is a `matrix` with dimensions determined by the lengths of the vectors:

```
outer(1:3, 1:3)
[,1] [,2] [,3]
[1,] 1 2 3
[2,] 2 4 6
[3,] 3 6 9
```

You can use a different function, and use `Y` as a parameter passed to `FUN`. In the following example `Y` specifies the `base` for the `log`:

```
m
[,1] [,2]
[1,] 5 1
[2,] 1 3
outer(m, Y = 2, FUN = log)
, , 1
[,1] [,2]
[1,] 2.321928 0.000000
[2,] 0.000000 1.584963
```

Essentially what happens is that the second `matrix` is converted to a `vector` and this is “applied” to the first `matrix` using the function specified. Each element of the second `matrix` is applied in turn, and the result is an `array` with multiple dimensions. In the following example the first `matrix` is 2×2 and so is the second, so the result has dimensions of `2, 2, 2, 2`:

```
A
[,1] [,2]
[1,] 4 -1
[2,] 1 3
m
[,1] [,2]
[1,] 5 1
[2,] 1 3
outer(A, m, FUN = "+")
, , 1, 1
[,1] [,2]
[1,] 9 4
[2,] 6 8
, , 2, 1
[,1] [,2]
[1,] 5 0
[2,] 2 4
, , 1, 2
[,1] [,2]
[1,] 5 0
[2,] 2 4
, , 2, 2
[,1] [,2]
[1,] 7 2
[2,] 4 6
# Similar result (but as a 2x2x4 array) using a vector
outer(A, c(5,1,1,3), FUN = "+")
```

### Kronecker product

The Kronecker matrix product is similar to the outer product, but the result is a single `array` with dimensions equal to `dim(X)` × `dim(Y)`.

```
kronecker(X, Y, FUN = "*", ...)
X %x% Y
```

The `kronecker()` command allows any function to be used, whilst the alias `%x%` is for multiplication (i.e. `FUN = "*"`).

```
m
[,1] [,2]
[1,] 5 1
[2,] 1 3
m %x% 2
[,1] [,2]
[1,] 10 2
[2,] 2 6
```

When your second object has more than one element, the result has dimensions equal to `dim(X)` × `dim(Y)`.

```
kronecker(m, 2:3, FUN = "+")
[,1] [,2]
[1,] 7 3
[2,] 8 4
[3,] 3 5
[4,] 4 6
xx
[,1] [,2] [,3]
[1,] 1 2 3
[2,] 4 5 6
yy
[,1]
[1,] 10
[2,] 100
kronecker(xx, yy)
[,1] [,2] [,3]
[1,] 10 20 30
[2,] 100 200 300
[3,] 40 50 60
[4,] 400 500 600
```

### Diagonals

A `matrix` diagonal is more or less what it sounds like! Matrix diagonals are used in various branches of matrix math. In `R` you can use the `diag()` function to extract or create `matrix` diagonals.

```
diag(x = 1, nrow, ncol, names = TRUE)
diag(x) <- value
```

There are 4 main uses for `diag()`:

- To extract the diagonal from `x` (a `matrix`).
- To make an identity `matrix` by specifying `nrow`.
- To make an identity `matrix` by specifying `x` as a scalar.
- To make a `matrix` with specified diagonals.

You can also assign diagonal values to an existing `matrix`.

#### Extract diagonals

To extract the diagonals from a `matrix` you simply specify the `matrix`:

```
C
[,1] [,2] [,3]
[1,] 3 1 -5
[2,] 2 0 4
[3,] 1 6 3
diag(C)
[1] 3 0 3
```

If the `matrix` is not square you get the diagonals starting from the top-left:

```
E
[,1] [,2] [,3]
[1,] -3 2 0
[2,] 0 5 -1
diag(E)
[1] -3 5
```

#### Identity matrix from nrow

If you omit `x` and specify `nrow` you get an identity matrix, that is one where the diagonals are `1` and the other elements are `0`. You may also give `ncol` to create a non-square `matrix` (note that `ncol` may be larger or smaller than `nrow`):

```
diag(nrow = 4)
[,1] [,2] [,3] [,4]
[1,] 1 0 0 0
[2,] 0 1 0 0
[3,] 0 0 1 0
[4,] 0 0 0 1
diag(nrow = 4, ncol = 3)
[,1] [,2] [,3]
[1,] 1 0 0
[2,] 0 1 0
[3,] 0 0 1
[4,] 0 0 0
diag(nrow = 4, ncol = 6)
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 1 0 0 0 0 0
[2,] 0 1 0 0 0 0
[3,] 0 0 1 0 0 0
[4,] 0 0 0 1 0 0
```

#### Identity matrix from scalar

If you specify `x` as a scalar (a single value) you get an identity `matrix` with diagonals of `1` and with dimensions (`nrow`, `ncol`) set to the value of the scalar:

```
diag(4)
[,1] [,2] [,3] [,4]
[1,] 1 0 0 0
[2,] 0 1 0 0
[3,] 0 0 1 0
[4,] 0 0 0 1
```

#### Matrix from vector

If you specify `x` as a `vector` you get a `matrix` where the diagonals are the values in `x` and the other elements are `0`:

```
diag(x = c(3,4,2,6))
[,1] [,2] [,3] [,4]
[1,] 3 0 0 0
[2,] 0 4 0 0
[3,] 0 0 2 0
[4,] 0 0 0 6
```

#### Set diagonals for existing matrix

You can set the diagonals for an existing `matrix` by assigning values:

```
mm
[,1] [,2]
[1,] 8 1
[2,] 1 9
diag(mm) <- 100
mm
[,1] [,2]
[1,] 100 1
[2,] 1 100
diag(mm) <- c(6,3)
mm
[,1] [,2]
[1,] 6 1
[2,] 1 3
CC
[,1] [,2] [,3]
[1,] 3 1 -5
[2,] 2 0 4
[3,] 1 6 3
diag(CC) <- c(99, 999)
Error in `diag<-`(`*tmp*`, value = c(99, 999)) :
replacement diagonal has wrong length
```

**Note**:

- If you specify a single value it is used for all diagonals.
- If you specify the “correct” number of elements you replace the existing values.
- If you specify too few or too many elements you get an error.

### Eigenvalues and eigenvectors

Spectral decomposition of a `matrix` produces eigenvalues and eigenvectors. These values are often used in linear algebra and various branches of mathematics.

The `R` function `eigen()` can compute the eigenvalues and eigenvectors of a (square) `matrix`.

```
eigen(x, symmetric, only.values = FALSE)
```

The `eigen()` function only works on square `matrix` objects.

The `symmetric` parameter allows you to specify `TRUE` to force the calculations to use only the lower triangle, when you have a symmetric `matrix`. If you leave this at the default the function checks whether the values are symmetric.

The `only.values` parameter can be set to `TRUE` to return only the eigenvalues.

```
m
[,1] [,2]
[1,] 5 1
[2,] 1 3
eigen(m)
eigen() decomposition
$values
[1] 5.414214 2.585786
$vectors
[,1] [,2]
[1,] -0.9238795 0.3826834
[2,] -0.3826834 -0.9238795
```
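As a sanity check, each eigenpair should satisfy the defining relation: the `matrix` times an eigenvector equals the eigenvalue times that eigenvector. A minimal sketch, rebuilding `m` from the output above:

```
m <- matrix(c(5,1, 1,3), ncol = 2)
ev <- eigen(m)
# m %*% V should match V %*% diag(lambda), column by column
all.equal(m %*% ev$vectors, ev$vectors %*% diag(ev$values))
```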

If the `matrix` is not symmetric the eigenvalues (and eigenvectors) can be `complex`:

```
M
[,1] [,2]
[1,] 5 -2
[2,] 2 4
eigen(M)
eigen() decomposition
$values
[1] 4.5+1.936492i 4.5-1.936492i
$vectors
[,1] [,2]
[1,] 0.7071068+0.0000000i 0.7071068+0.0000000i
[2,] 0.1767767-0.6846532i 0.1767767+0.6846532i
```

### Singular Value Decomposition

Singular value decomposition of a `matrix` is important in many statistical routines. SVD is a kind of factorisation of a `matrix` and is carried out in `R` with the `svd()` function:

```
svd(x, nu = min(n, p), nv = min(n, p))
```

In the `svd()` function:

- `x` is a `matrix`.
- `nu` is the number of left singular vectors to compute.
- `nv` is the number of right singular vectors to compute.

The `svd()` function returns a result with several components:

- `$d` a `vector` of singular values in descending order.
- `$u` a `matrix` of left singular vectors.
- `$v` a `matrix` of right singular vectors.

```
C
[,1] [,2] [,3]
[1,] 3 1 -5
[2,] 2 0 4
[3,] 1 6 3
svd(C)
$d
[1] 7.620874 5.810848 3.025941
$u
[,1] [,2] [,3]
[1,] -0.4710376 -0.7804341 -0.4111522
[2,] 0.4517527 0.1869132 -0.8723434
[3,] 0.7576563 -0.5966457 0.2645201
$v
[,1] [,2] [,3]
[1,] 0.03254861 -0.4412646 -0.8967866
[2,] 0.53470246 -0.7503738 0.3886290
[3,] 0.84441333 0.4921633 -0.2115217
```
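The three components multiply back to the original `matrix` (x = u d v'). A small sketch, rebuilding `C` from the output above:

```
C <- matrix(c(3,2,1, 1,0,6, -5,4,3), ncol = 3)
s <- svd(C)
# u %*% diag(d) %*% t(v) reconstructs the original matrix
all.equal(s$u %*% diag(s$d) %*% t(s$v), C)
```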

You can use the `nu` and `nv` arguments to reduce (to `0` if you like) the number of left and/or right singular vectors.

```
VADeaths
Rural Male Rural Female Urban Male Urban Female
50-54 11.7 8.7 15.4 8.4
55-59 18.1 11.7 24.3 13.6
60-64 26.9 20.3 37.0 19.3
65-69 41.0 30.9 54.6 35.1
70-74 66.0 54.3 71.1 50.0
svd(VADeaths, nu = 2, nv = 2)
$d
[1] 161.967972 10.037542 3.212076 1.194284
$u
[,1] [,2]
[1,] -0.1404491 -0.1476512
[2,] -0.2161352 -0.3623160
[3,] -0.3297674 -0.4373676
[4,] -0.5099065 -0.4819224
[5,] -0.7515374 0.6506817
$v
[,1] [,2]
[1,] -0.5243856 0.3123820
[2,] -0.4137210 0.6015805
[3,] -0.6229108 -0.7282861
[4,] -0.4072306 0.1005871
```

### Cholesky decomposition

Compute the Choleski (Cholesky) factorization of a real symmetric positive-definite square `matrix` using the `chol()` function. It is a kind of factorisation that can be used in various mathematical problems.

```
chol(x, pivot = FALSE, tol = -1, ...)
```

In `chol()`:

- `x` is a `matrix`, which should be symmetric (i.e. square) and positive definite.
- `pivot` can be set to `TRUE` in some cases where the `matrix` is positive semi-definite.
- `tol` is used to set a tolerance when `pivot = TRUE`.

How can you tell if a matrix is positive definite or positive semi-definite?

- A `matrix` is **symmetric** if `nrow` = `ncol` (in other words, square) and, in addition, the transposed `matrix` is the same as the original.
- If all the eigenvalues are positive (>0) the `matrix` is **positive definite**.
- If any eigenvalues are `0` and the others positive then the `matrix` is **positive semi-definite**.

You can check the eigenvalues with `eigen(x)`.

```
m
[,1] [,2]
[1,] 5 1
[2,] 1 3
# Check matrix is positive definite
eigen(m)$val
[1] 5.414214 2.585786
# Check that matrix is symmetric
identical(m, t(m))
[1] TRUE
chol(m)
[,1] [,2]
[1,] 2.236068 0.4472136
[2,] 0.000000 1.6733201
A
[,1] [,2]
[1,] 4 -1
[2,] 1 3
# Check matrix (eigenvalues are complex, with positive real parts)
eigen(A)$val
[1] 3.5+0.866025i 3.5-0.866025i
# Check if matrix is symmetric
identical(A, t(A))
[1] FALSE
# chol() will return a result as it does not check symmetry!
```
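The result of `chol()` is the upper-triangular factor R such that `t(R) %*% R` gives back the original `matrix`. A minimal check, rebuilding `m` from above:

```
m <- matrix(c(5,1, 1,3), ncol = 2)
R <- chol(m)                 # upper-triangular factor
# crossprod(R), i.e. t(R) %*% R, recovers the original matrix
all.equal(crossprod(R), m)
```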

If you have an eigenvalue of `0` (with the others `>0`) then your `matrix` is positive semi-definite. You can perform Cholesky decomposition with `chol()` but you need to use `pivot = TRUE` as an argument.

```
im <- matrix(c(1,-1, -1,1), ncol = 2)
im
[,1] [,2]
[1,] 1 -1
[2,] -1 1
# matrix is +ve semi-definite
eigen(im)$val
[1] 2 0
# Check symmetric
identical(im, t(im))
[1] TRUE
chol(im)
Error in chol.default(im) :
the leading minor of order 2 is not positive definite
# Need to use pivot = TRUE
chol(im, pivot = TRUE)
[,1] [,2]
[1,] 1 -1
[2,] 0 0
attr(,"pivot")
[1] 1 2
attr(,"rank")
[1] 1
Warning message:
In chol.default(im, pivot = TRUE) :
the matrix is either rank-deficient or indefinite
```

Note that you still get a warning!

The eigenvalues need to be `>0` but there is a tolerance:

```
mat <- matrix(c(4,12,-16, 12,37,-43, -16,-43,98), ncol = 3)
mat
[,1] [,2] [,3]
[1,] 4 12 -16
[2,] 12 37 -43
[3,] -16 -43 98
# Eigenvalues all >0 (just)
eigen(mat)$val
[1] 123.47723179 15.50396323 0.01880498
# Check symmetry
identical(mat, t(mat))
[1] TRUE
chol(mat)
[,1] [,2] [,3]
[1,] 2 6 -8
[2,] 0 1 5
[3,] 0 0 3
a <- matrix(c(2,3,1,11, 3,9,3,30, 1,3,1,10, 11,30,10,101), ncol = 4)
a
[,1] [,2] [,3] [,4]
[1,] 2 3 1 11
[2,] 3 9 3 30
[3,] 1 3 1 10
[4,] 11 30 10 101
# Very small eigenvalue registers as effectively 0
eigen(a)$val
[1] 1.120990e+02 9.009892e-01 1.681988e-14 1.831868e-15
# Check symmetry
identical(a, t(a))
[1] TRUE
chol(a)
Error in chol.default(a) :
the leading minor of order 3 is not positive definite
```

If you try to use `pivot = TRUE` on a `matrix` that is not non-negative definite, you’ll get a result with a warning. However, the result will be meaningless!

```
# Indefinite, i.e. not non-negative definite
b <- matrix(c(5,-5,-5,3), 2, 2)
b
[,1] [,2]
[1,] 5 -5
[2,] -5 3
# check symmetry -- okay
identical(b, t(b))
[1] TRUE
# a negative eigenvalue, so not non-negative definite
eigen(b)$val
[1] 9.09902 -1.09902
# pivot = TRUE runs but result is meaningless!
#.. not shown
```

### QR Decomposition

QR decomposition is used in many statistical methods, especially in regression. The `qr()` function carries out the calculations.

```
qr(x, tol = 1e-07, LAPACK = FALSE, ...)
```

In `qr()`:

- `x` is a `matrix` to be decomposed.
- `tol` is a tolerance used for detecting linear dependencies (you probably won’t need to tinker with this).
- `LAPACK` selects which QR algorithm to use; the default uses `LINPACK`.

The result of `qr()` is an object with various components:

```
C
[,1] [,2] [,3]
[1,] 3 1 -5
[2,] 2 0 4
[3,] 1 6 3
qr(C)
$qr
[,1] [,2] [,3]
[1,] -3.7416574 -2.405351 1.069045
[2,] 0.5345225 5.586975 2.787095
[3,] 0.2672612 -0.983516 6.410089
$rank
[1] 3
$qraux
[1] 1.801784 1.180821 6.410089
$pivot
[1] 1 2 3
attr(,"class")
[1] "qr"
```

QR decomposition is a potentially large topic, for which there is not space here! In addition to the `qr()` function there are various *helper* functions. See the help entry for `qr()` for more information about those.

### Determinant

You can calculate the determinant of a square `matrix` using the `det()` function. A square `matrix` is one where `nrow` = `ncol`, but it does not have to be symmetric (where `x = t(x)`).

```
det(x)
determinant(x, logarithm = TRUE)
```

The `det()` function returns the determinant. The `determinant()` function returns the modulus of the determinant and the sign. The default for `determinant()` is to return the `log` of the modulus.

```
A
[,1] [,2]
[1,] 4 -1
[2,] 1 3
# Matrix is square but no matter that it is not symmetric
identical(A, t(A))
[1] FALSE
det(A)
[1] 13
determinant(A)
$modulus
[1] 2.564949
attr(,"logarithm")
[1] TRUE
$sign
[1] 1
attr(,"class")
[1] "det"
determinant(A, logarithm = FALSE)
$modulus
[1] 13
attr(,"logarithm")
[1] FALSE
$sign
[1] 1
attr(,"class")
[1] "det"
```
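The two forms are consistent: the sign multiplied by the exponential of the log-modulus recovers the plain determinant. A minimal sketch with the `A` matrix from above:

```
A <- matrix(c(4,1, -1,3), ncol = 2)
d <- determinant(A)                      # log of the modulus by default
# sign * exp(log-modulus) should equal det(A)
all.equal(as.numeric(d$sign * exp(d$modulus)), det(A))
```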

Note that the `logarithm` argument **cannot** be abbreviated.

### Cross-products

Cross products are where you multiply two `matrix` objects (an inner product), but with one of them transposed. The `R` functions `crossprod()` and `tcrossprod()` carry out the calculations.

```
crossprod(x, y = NULL)
tcrossprod(x, y = NULL)
```

You specify at least one `matrix` object `x`. Object `y` is optional; if it is not specified, `x` is used. These functions are essentially equivalent to:

- `t(x) %*% y` for `crossprod()`
- `x %*% t(y)` for `tcrossprod()`

```
b
[,1] [,2]
[1,] 5 -5
[2,] -5 3
crossprod(b)
[,1] [,2]
[1,] 50 -40
[2,] -40 34
B
[,1] [,2]
[1,] 1 3
[2,] -2 -4
[3,] 2 0
D
[,1]
[1,] 3
[2,] 2
[3,] -2
crossprod(B, D)
[,1]
[1,] -5
[2,] 1
```

You get an error if the dimensions are “wrong”:

```
A
[,1] [,2]
[1,] 4 -1
[2,] 1 3
D
[,1]
[1,] 3
[2,] 2
[3,] -2
crossprod(A, D)
Error in crossprod(A, D) : non-conformable arguments
```

If you specify a `vector` it will be “converted” to a one-column or one-row `matrix`:

```
tcrossprod(1:4)
[,1] [,2] [,3] [,4]
[1,] 1 2 3 4
[2,] 2 4 6 8
[3,] 3 6 9 12
[4,] 4 8 12 16
```

## Miscellaneous tools

There are various other tools that you can use with `matrix` objects:

### Transposition

Perhaps the simplest `matrix` operation is transposition, that is, switching the rows and columns. You can carry out `matrix` transposition using the `t()` function.

```
t(x)
```

Example: transposing `matrix` objects:

```
x
[,1] [,2] [,3]
[1,] 1 3 1
[2,] 1 0 0
t(x)
[,1] [,2]
[1,] 1 1
[2,] 3 0
[3,] 1 0
D
[,1]
[1,] 3
[2,] 2
[3,] -2
t(D)
[,1] [,2] [,3]
[1,] 3 2 -2
```

### Upper and Lower triangles

You can “see” the upper or lower triangle of a `matrix` (or any 2-D object) using the `upper.tri()` and `lower.tri()` functions. These give you a `logical` `matrix` the same size as the original, i.e. `TRUE` or `FALSE` values.

```
upper.tri(x, diag = FALSE)
lower.tri(x, diag = FALSE)
```

You can optionally set `diag = TRUE` to return the diagonals as `TRUE`.

```
A
[,1] [,2]
[1,] 4 -1
[2,] 1 3
upper.tri(A)
[,1] [,2]
[1,] FALSE TRUE
[2,] FALSE FALSE
B
[,1] [,2]
[1,] 1 3
[2,] -2 -4
[3,] 2 0
lower.tri(B, diag = TRUE)
[,1] [,2]
[1,] TRUE FALSE
[2,] TRUE TRUE
[3,] TRUE TRUE
```

### Row and Column index

You can produce a `matrix` of `integer` values based on their position in a `matrix` object (or similar 2-D object). The `row()` and `col()` functions operate over rows and columns respectively.

```
row(x, as.factor = FALSE)
.row(dim)
col(x, as.factor = FALSE)
.col(dim)
```

Essentially you provide a *template* object and the functions return a new object with the appropriate values.

```
C
[,1] [,2] [,3]
[1,] 3 1 -5
[2,] 2 0 4
[3,] 1 6 3
row(C)
[,1] [,2] [,3]
[1,] 1 1 1
[2,] 2 2 2
[3,] 3 3 3
```

If you specify the argument `as.factor = TRUE` you get the result stored as a `factor` rather than a regular `numeric`.

```
E
[,1] [,2] [,3]
[1,] -3 2 0
[2,] 0 5 -1
col(E, as.factor = TRUE)
[,1] [,2] [,3]
[1,] 1 2 3
[2,] 1 2 3
Levels: 1 2 3
```

There are alternative versions, `.row()` and `.col()`, which take a `vector` argument giving the dimensions of the `matrix` for which the indices are required.

```
.row(dim = c(3, 5))
[,1] [,2] [,3] [,4] [,5]
[1,] 1 1 1 1 1
[2,] 2 2 2 2 2
[3,] 3 3 3 3 3
```

## Summary

The topic of `matrix` math is extensive! This article is not intended to be an exhaustive thesis on the subject but more of an introduction to some of the `R` tools that you can use to carry out `matrix` math and other operations.

This article is partly in support of my book *An Introduction to R*; see the publications page for more information.

An Introduction to R will be published by Pelagic Publishing.
