Matrix algebra
You can access matrix elements in the usual way:
```
scala> val a = 1\2\3 on 4\5\6 on 7\8\9
a: BIDMat.IMat =
   1   2   3
   4   5   6
   7   8   9

scala> a(0,0)
res0: Int = 1

scala> a(1,2)
res1: Int = 6
```
Note that indices are zero-based, like Java, C and Scala. Assignment works as expected:
```
scala> a(1,1) = -1
res2: Int = -1

scala> a
res3: BIDMat.IMat =
   1   2   3
   4  -1   6
   7   8   9
```
Standard matrix operators are available (*, +, -). There are also element-wise multiplication and division operators, *@ (or unicode ∘) and /. All element-wise operators work with one of three pairings of arguments:
- (a op b) where a and b have identical dimensions.
- (a op b) where one of a and b is a 1x1 matrix. Normally, a scalar will correctly cast as the 1x1 matrix. So a+1 adds 1 to every element of a.
- (a op b) where one of a and b is a column or row vector matching the corresponding dimension of the other argument. e.g. if a is nxm and b is a 1xm row vector, then a *@ b returns the matrix whose columns are the columns of a scaled by the corresponding elements of b.
```
scala> val a = col(1,2,3)*row(0 to 3)
a: BIDMat.FMat =
   0   1   2   3
   0   2   4   6
   0   3   6   9

scala> val b = col(1,2,3)
b: BIDMat.FMat =
   1
   2
   3

scala> b*@a
res16: BIDMat.FMat =
   0   1   2   3
   0   4   8  12
   0   9  18  27
```
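The row-vector pairing works the same way along the other dimension. A sketch continuing the session above (the outputs here are computed by hand, not a captured transcript):

```scala
scala> val r = row(1,2,3,4)

scala> a *@ r        // column j of a is scaled by r(j)
   0   2   6  12
   0   4  12  24
   0   6  18  36
```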
The comparison operators >, <, ==, <=, >=, != are available. They are applied element-wise, like the arithmetic operators +, -, *@ and /. Right now they return a result matrix of the same type as the argument matrices, filled with 1 (true) or 0 (false). This is to support filtering the contents of a matrix of the original type by element-wise multiplication. This may change in future; there is currently no binary matrix type, so there is no other clearly better type choice than the argument matrix types.
```
scala> val a = rand(3,4); val b = rand(3,4)
a: BIDMat.FMat =
   0.98297    0.22743  0.0038280    0.91555
   0.98098    0.96591    0.78442    0.41436
  0.093067    0.60378    0.92436    0.94644

b: BIDMat.FMat =
   0.97291    0.53571    0.81189   0.027426
   0.90324    0.93507    0.92507    0.73449
   0.75184    0.20606    0.10032    0.28513

scala> a > b
res17: BIDMat.FMat =
   1   0   0   1
   1   1   0   0
   0   1   1   1

scala> a > 0.5
res18: BIDMat.FMat =
   1   0   0   1
   1   1   1   0
   0   1   1   1
```
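The filtering idiom mentioned above looks like this: multiplying a matrix element-wise by a 0/1 comparison result zeroes out the entries that fail the test. A sketch using the matrix a above (outputs computed by hand, not a captured transcript):

```scala
scala> (a > 0.5) *@ a    // keep only the entries of a that exceed 0.5
   0.98297   0         0         0.91555
   0.98098   0.96591   0.78442   0
   0         0.60378   0.92436   0.94644
```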
There are two equation-solving operators, lsolve (unicode ◁) and rsolve (unicode ▷). These operators solve systems of equations on the left and right, respectively: a ▷ c returns the x satisfying a * x = c, while b ◁ a returns the x satisfying x * a = b.
```
scala> a
res8: BIDMat.FMat =
   0.15675    0.43748   0.081511    0.46293
   0.69485    0.91233    0.87120    0.12652
   0.88596    0.58793    0.90858    0.45308
   0.84817   0.080891   0.022294    0.73676

scala> c
res9: BIDMat.FMat =
  0.097704
   0.71330
   0.45136
   0.14168

scala> val d = a ▷ c
d: BIDMat.FMat =
   0.56076
   0.59981
  -0.18204
  -0.51360

scala> a * d
res11: BIDMat.FMat =
  0.097704
   0.71330
   0.45136
   0.14168

scala> b
res12: BIDMat.FMat =
   0.67636   0.31279   0.10547   0.83065

scala> val e = b ◁ a
e: BIDMat.FMat =
   0.69000  -0.17383   0.20620   0.59693

scala> e * a
res13: BIDMat.FMat =
   0.67636   0.31279   0.10547   0.83065
```
Matrices are stored in column-major order (Fortran and Matlab style). They support single indices, which give the position of the element in the internal data array. So:
```
scala> a
res3: BIDMat.IMat =
   1   2   3
   4  -1   6
   7   8   9

scala> a(5)
res5: Int = 8

scala> a(3) = 0
res6: Int = 0

scala> a
res8: BIDMat.IMat =
   1   0   3
   4  -1   6
   7   8   9
```
BIDMat supports slicing of arrays by vectors of indices and by wildcards. So:
```
scala> a(0,?)
res9: BIDMat.IMat =
   1   0   3

scala> a(?,1)
res10: BIDMat.IMat =
   0
  -1
   8
```
You can use ranges for slicing:
```
scala> a(0 to 1, 0 to 1)
res11: BIDMat.IMat =
   1   0
   4  -1
```
or integer matrices of particular elements:
```
scala> a(0\2, 0\2)
res12: BIDMat.IMat =
   1   3
   7   9
```
Slices are assignable:
```
scala> a(0 \ 2, 0 \ 2) = 2\2 on 4\4

scala> a
res14: BIDMat.IMat =
   2   0   2
   4  -1   6
   4   8   4
```
Slices support filling with constants:
```
scala> a(?,1) = 0

scala> a
res1: BIDMat.IMat =
   2   0   2
   4   0   6
   4   0   4
```
Slice indices can be single, accessing the array in column-major order as though it were a single column:
```
scala> a(?)
res2: BIDMat.IMat =
   2
   4
   4
   0
   0
   0
   2
   6
   4

scala> a(?) = 1

scala> a
res4: BIDMat.IMat =
   1   1   1
   1   1   1
   1   1   1
```
Finally, ranges can be used to fill slices within matrices, or even to create new matrices. In the latter case, you have to tell the compiler the type of the result so that the implicit cast from range to matrix can take effect. Like this:
```
scala> val r:IMat = 0->3
r: BIDMat.IMat =
   0   1   2
```
Find functions return tuples of indices, and possibly values, of the non-zero elements of a matrix. e.g. find(A) returns the single indices of the non-zero elements of A. Recall that single indices are of the form row + col * nrows for matrices in column-major order.
```
scala> b
res5: BIDMat.DMat =
         0    0.87985          0
   0.78089          0    0.38906
         0    0.13684   0.078547
         0          0    0.63639
         0          0          0

scala> find(b)
res6: BIDMat.IMat =
    1
    5
    7
   11
   12
   13
```
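The single-index arithmetic can be checked with a few lines of plain Scala (no BIDMat required; the helper name singleIndex is ours):

```scala
object SingleIndexCheck {
  // Column-major single index of the element at (row, col) in a matrix with nrows rows.
  def singleIndex(row: Int, col: Int, nrows: Int): Int = row + col * nrows

  def main(args: Array[String]): Unit = {
    // The (row, col) coordinates of the non-zeros of the 5x3 matrix b above.
    val coords = Seq((1, 0), (0, 1), (2, 1), (1, 2), (2, 2), (3, 2))
    val singles = coords.map { case (r, c) => singleIndex(r, c, nrows = 5) }
    println(singles.mkString(" ")) // prints 1 5 7 11 12 13, matching find(b)
  }
}
```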
The function find2(A) returns a two-tuple of the row and column indices of non-zeros of A:
```
scala> val binds = find2(b)
binds: (BIDMat.IMat, BIDMat.IMat) = ( 1 0 2 1 2 3 , 0 1 1 2 2 2 )

scala> binds._1 \ binds._2
res9: BIDMat.IMat =
   1   0
   0   1
   2   1
   1   2
   2   2
   3   2
```
And find3(A) returns a three-tuple with row index, column index, and value:
```
scala> val binds = find3(b)
binds: (BIDMat.IMat, BIDMat.IMat, BIDMat.FMat) = ( 1 0 2 1 2 3 , 0 1 1 2 2 2 , 0.78089 0.87985 0.13684 0.38906 0.078547 0.63639 )

scala> binds._1 \ binds._2 \ binds._3
res10: BIDMat.FMat =
   1   0    0.78089
   0   1    0.87985
   2   1    0.13684
   1   2    0.38906
   2   2   0.078547
   3   2    0.63639
```
Note that the index matrices returned by find, find2 and find3 are all IMat type. But the value array returned by find3 matches the type of the input matrix.
Generally these operators behave as expected, but occasionally an ambiguous type cast will cause an error. In those cases, you can always make the indices and RHS types explicit by creating indices of type IMat and RHS matrices of a type matching the LHS, i.e. DMat, FMat or IMat.
To facilitate time measurements and code optimization, BIDMat includes MATLAB-like timing functions tic and toc. They can be used to time calculations:
```
scala> val n = 5000; val a = rand(n,n); val b = rand(n,n)
n: Int = 5000
a: BIDMat.FMat =
   0.67636    0.37802   0.019282    0.37650   0.022408    0.76501    0.98461    0.26386    0.95407    0.53162    0.58297...
   0.31279    0.20095    0.66822    0.23491    0.84426    0.10950    0.47546    0.43356    0.71042    0.22477    0.59530...
   0.10547    0.75090    0.35638    0.32437    0.61692    0.49045    0.64625    0.74647    0.80904    0.86158    0.55967...
   0.83065    0.16700    0.30008    0.27266    0.41159    0.20274    0.65723    0.73867    0.26418    0.18558    0.52546...
   0.15675    0.21925    0.63147    0.88122    0.64219    0.92759    0.71214    0.31279   0.042941    0.45162   0.063413...
   0.69485    0.92700    0.91147    0.84543    0.64784   0.093165    0.24176    0.56946    0.45874   0.028407    0.72586...
   0.88596    0.66543    0.45513   0.021340    0.31171    0.68933    0.30451    0.87223   ...

scala> tic; {val c = a*b}; val t1 = toc
t1: Double = 2.581
```
Showing that the matrix multiply took about 2.6 seconds. The braces around the matrix multiply simply convert the expression into a Unit and suppress printing of the contents of c.
There is a similar pair of functions, flip and flop, which compute both the net flops and the time since the last flip. For convenience, there is also a gflop function which returns gigaflops:
```
scala> flip; {val c = a*b}; val t1 = gflop
t1: (Double, Double) = (96.86168151879117,2.581)
```
Showing that this operation netted about 97 gigaflops. Dense matrix multiply is one of the few operations that is not memory-bound; most other operations are substantially slower, e.g. addition of matrices:
```
scala> flip; for (i <- 0 until 100) {c ~ a+b}; val t1 = flop
t1: (Double, Double) = (0.5330490405117271,4.69)
```

Here the for loop repeats the operation often enough to get an accurate time measurement.
Each matrix operator generates a new matrix to hold its output. To save on storage, there is an alternative "assignment" operator ~ which saves the result into an existing matrix. For example, if a, b, and c all have the same dimensions:
```
a ~ b + c
```
stores the result of the addition in a, without creating a new matrix. Since the expression (a + b) would normally produce a new matrix for output, the operator ~ is chosen to have higher precedence than arithmetic operators. So a ~ b + c associates as (a ~ b) + c. The pair (a ~ b) contains an operand b and a target matrix a, so when the + method is invoked on it, it is able to perform the addition and then save the result into a. For more complicated expressions, temporary variables can be used to save partial results, keeping in mind the precedence of ~. For example, the expression a*b + c*d requires temp storage for the results (a*b) and (c*d), and then another matrix to hold the final result. It can be implemented with
```
e ~ (t1 ~ a*b) + (t2 ~ c*d)
```
where t1 and t2 are temp matrices whose dimensions match the results of the matrix multiplies.
Caution: ~ is an operator that generates an operand/destination pair, but it is the operators applied to this pair that do the saving. So

```
e ~ a
```

does not store the contents of a in e, but

```
e ~ a + 0
```

or

```
e ~ a * 1
```

will. To copy one matrix into another existing matrix, use:
```
e <-- a
```