Stream

trait Stream[+A]

Value Stream

Read about Stream in the Guide.

Stream has just one method to be implemented, but it comes with a large library of attached extension methods.

Source
__.scala

Supertypes
class java.lang.Object
trait scala.Matchable
class Any

Known subtypes
trait Stream.Preview[Stream.Preview.A]

Def

def readOpt: Opt[A]

Read next option

Read next option

Optionally returns next element or empty option

If an empty option is returned, the stream is considered exhausted and should be discarded

This is the only method of the stream interface that must be implemented; the rest of the functionality is provided by extension methods.

Source
__.scala

Extension

@targetName("join")
inline def +[A](v: A): Stream[A]

Alias for join

Alias for join

Creates a new Stream with given element appended to current Stream

  ((1 <> 5).stream + 99 + 100).tp

  // Output
  Stream(1, 2, 3, 4, 5, 99, 100)
Inherited from
_extend
Source
_extend.scala
@targetName("joinAll")
inline def ++[A](v: Stream[A]): Stream[A]

Alias for joinAll

Alias for joinAll

Creates a new Stream with given elements appended to current Stream

  (('1' <> '9').stream ++ ('a' <> 'd') ++ ('A' <> 'D')).tp

  // Output
  Stream(1, 2, 3, 4, 5, 6, 7, 8, 9, a, b, c, d, A, B, C, D)
Inherited from
_extend
Source
_extend.scala
@targetName("joinAllAt")
inline def ++@[A](index: Int, v: Stream[A]): Stream[A]

Alias for joinAllAt

Alias for joinAllAt

Creates a new Stream with given elements inserted into current Stream at given index

If index is out of range, the elements are prepended or appended

   (('a' <> 'f').stream ++@ (3, 'X' <> 'Z')).tp

   // Output
   Stream(a, b, c, X, Y, Z, d, e, f)
Inherited from
_extend
Source
_extend.scala
@targetName("joinAt")
inline def +@[A](index: Int, v: A): Stream[A]

Alias for joinAt

Alias for joinAt

Creates a new Stream with given element inserted into current Stream at given index

If index is out of range, the element is prepended or appended

 (('a' <> 'd').stream +@ (2, 'X')).tp

  // Output
  Stream(a, b, X, c, d)
Inherited from
_extend
Source
_extend.scala
inline def average[A](using v: Math.Average[A]): A

Average

Average

Computes average

For empty Stream returns zero value

   (10 <> 15).stream.map(_.toFloat).average  // Returns 12.5

Note: average is available for types providing given Math.Average implementations, which are by default Double, Float and opaque numerals based on Double and Float

Inherited from
_calculate
Source
_calculate.scala
inline def averageFew[A](fb: A => Opt[B], fc: A => Opt[C], fd: A => Opt[D], fe: A => Opt[E], ff: A => Opt[F])(using nb: Math.Average[B], nc: Math.Average[C], nd: Math.Average[D], ne: Math.Average[E], nf: Math.Average[F]): (B, C) | (B, C, D) | (B, C, D, E) | (B, C, D, E, F)

Multi average

Multi average

Simultaneously computes up to 5 average values for properties specified by functions

Returns tuple of appropriate size with values corresponding to the given mappings

For empty Stream returned tuple will hold zeros

   (1 <> 1000).stream.averageFew(_ * 10F, _ * 100F).tp  // Prints (5005, 50050)

    val (first, second, third) = (1 <> 1000).stream.averageFew(v => v.toDouble, _ * 10.0, _ * 100.0)

    first.tp     // Prints 500.5
    second.tp    // Prints 5005.0
    third.tp     // Prints 50050.0

Note: Averages are available for types providing given Math.Average implementations, which are by default Double, Float and opaque numerals based on Double and Float

Inherited from
_calculate
Source
_calculate.scala
inline def averageOpt[A](using v: Math.Average[A]): Opt[A]

Average option

Average option

Computes average or returns void option for empty stream

   (10 <> 15).stream.map(_.toFloat).averageOpt  // Returns Opt(12.5)

Note: averageOpt is available for types providing given Math.Average implementations, which are by default Double, Float and opaque numerals based on Double and Float

Inherited from
_calculate
Source
_calculate.scala
inline def collect[A](f: scala.PartialFunction[A, B]): Stream[B]

Partial map

Partial map

Creates a new Stream by applying a partial function to all elements of current Stream on which the function is defined.

(0 <>> 26).stream.collect{
 case i if(i%2==0) => ('a' + i).toChar
}.tp

// Output
Stream(a, c, e, g, i, k, m, o, q, s, u, w, y)

Note:

  • collect is functionally similar to mapOpt, which is preferable in most cases.
  • 'partialMap' would be a better name for this operation, but 'collect' is an established Scala convention.
Inherited from
_map
Source
_map.scala
inline def contains[A](value: A): Boolean

Value check

Value check

Returns true if stream contains given value.
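
A usage sketch in the style of the other examples on this page (expected results, not taken from the original docs):

  (1 <> 10).stream.contains(5)    // Returns true

  (1 <> 10).stream.contains(15)   // Returns false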

Inherited from
_evaluate
Source
_evaluate.scala
inline def containsSequence[A](seq: Stream[A]): Boolean

Sequence check

Sequence check

Returns true if stream contains given sequence of values.
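
An illustrative sketch (expected results, assuming the usual range-to-stream conversion shown elsewhere on this page):

  ('a' <> 'z').stream.containsSequence('d' <> 'f')         // Returns true

  ('a' <> 'z').stream.containsSequence(Stream('d', 'x'))   // Returns false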

Inherited from
_evaluate
Source
_evaluate.scala
inline def count[A](f: A => Boolean): Int

Conditional count

Conditional count

Counts all stream elements which satisfy the given predicate
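
For example (an illustrative sketch, result assumed):

  (1 <> 100).stream.count(_ % 3 == 0)   // Returns 33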

Inherited from
_evaluate
Source
_evaluate.scala
inline def count[A]: Int

All count

All count

Counts all stream elements

Inherited from
_evaluate
Source
_evaluate.scala
inline def countAndTime[A]: (Int, Time.Length)

Count and time

Count and time

Returns the total element count and the Time.Length it took to pump the stream

  val (cnt,time) = (1 <> 1000).stream.peek(_ => J.sleep(1.Millis)).countAndTime

  ("" + cnt + " elements processed in " + time.tag).tp

  // Output
  1000 elements processed in 1.488880500 sec
Inherited from
_evaluate
Source
_evaluate.scala
inline def countFew[A](f1: A => Boolean, f2: A => Boolean, f3: A => Boolean, f4: A => Boolean, f5: A => Boolean): (Int, Int) | (Int, Int, Int) | (Int, Int, Int, Int) | (Int, Int, Int, Int, Int)

Multi count

Multi count

Simultaneously counts values for up to 5 different predicates

Returns tuple of appropriate size with values corresponding to the given mappings

For empty Stream returned tuple will hold zeros

val (large, even, odd) = (1 <>> 1000).stream.countFew(_ > 100, _ % 2 == 0, _ % 2 == 1)

large.tp    // Prints 899
even.tp     // Prints 499
odd.tp      // Prints 500
Inherited from
_evaluate
Source
_evaluate.scala
inline def default[A](v: => A): Stream[A]

Default element

Default element

If current Stream is empty, the given element will be appended

Otherwise current Stream will not change

 (1 <>> 1).stream.default(99).tp // Prints Stream(99)

 (1 <>> 5).stream.default(99).tp // Prints Stream(1, 2, 3, 4)
Inherited from
_extend
Source
_extend.scala
inline def docTree[A]: Doc.Tree

Doc Tree description

Doc Tree description

Returns a tree describing all stream transformations

('a' <> 'z').stream
 .map(_.toInt)
 .take(_ % 2 == 0)
 .docTree.tp

// Output
scalqa.lang.int.g.Stream$TakeStream$2@4ds1{raw=Int}
 scalqa.lang.char.z.stream.map$Ints@j38c{raw=Int,fromRaw=Char,size=26}
   scalqa.lang.char.Z$Stream_fromRange@gw1k{raw=Char,size=26,from=a,step=1}
Inherited from
_metadata
Source
_metadata.scala
inline def drain[A]: Unit

Pump stream out

Pump stream out

Fetches and discards all stream elements

This operation can be useful for side effects built into the streaming pipeline

 ('A' <> 'C').stream.peek(_.tp).drain

 // Output
 A
 B
 C
Inherited from
_process
Source
_process.scala
inline def drop[A](f: A => Boolean): Stream[A]

Reverse filter

Reverse filter

Disallows Stream elements satisfying the given function

  (0 <>> 10).stream.drop(_ > 5).tp

  // Output
  Stream(0, 1, 2, 3, 4, 5)

Note: Scala equivalent is called "filterNot"

Inherited from
_drop
Source
_drop.scala
inline def DROP[A](f: A => Boolean): Stream[A]

Heavy reversed filter

Heavy reversed filter

Disallows Stream elements satisfying the given function

DROP is functionally equivalent to drop, but is fully inlined. It makes compiled code larger, but guarantees the best possible performance on large streams.

Inherited from
_drop
Source
_drop.scala
inline def dropDuplicates[A]: Stream[A]

Duplicates reversed filter

Duplicates reversed filter

Drops elements equal to the one passed in the prior position

Note: To generally get rid of all duplicates, the stream must be sorted to arrange duplicates in sequence

(1 <> 10).stream.repeat(3).dropDuplicates.tp // Prints Stream(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
Inherited from
_drop
Source
_drop.scala
inline def dropDuplicatesBy[A](f: A => B): Stream[A]

Mapped duplicates reversed filter

Mapped duplicates reversed filter

Drops elements which evaluate to the same value as the element passed in the prior position

Note: To generally get rid of all duplicates, the stream must be sorted by the mapping function

  (1 <> 100).stream.dropDuplicatesBy(_.toString.length).tp

  // Output
  Stream(1, 10, 100)
Inherited from
_drop
Source
_drop.scala
inline def dropEvery[A](nTh: Int): Stream[A]

Every Nth element reversed filter

Every Nth element reversed filter

Drops every nTh element

  (1 <> 10).stream.dropEvery(3).tp   // Prints: Stream(1, 2, 4, 5, 7, 8, 10)
Inherited from
_drop
Source
_drop.scala
inline def dropFirst[A](n: Int): Stream[A]

Head reversed filter

Head reversed filter

Drops given number of first elements

  (1 <> 10).stream.dropFirst(3).tp  // Prints  Stream(4, 5, 6, 7, 8, 9, 10)
Inherited from
_drop
Source
_drop.scala
inline def dropLast[A](n: Int): Stream[A]

Tail reversed filter

Tail reversed filter

Drops given number of elements coming last

  (1 <> 10).stream.dropLast(3).tp  // Prints  Stream(1, 2, 3, 4, 5, 6, 7)

Note: This method will block on unlimited streams

Inherited from
_drop
Source
_drop.scala
inline def dropOnly[A](v: A): Stream[A]

Single value reversed filter

Single value reversed filter

Drops only specified value.

  (1 <> 4).stream.dropOnly(3).tp

  // Output
  Stream(1, 2, 4)

Note: dropOnly is more efficient than general filter ".drop(_ == value)", because there is no function involved.

Inherited from
_drop
Source
_drop.scala
inline def dropRange[A](i: Int.Range): Stream[A]

Range reversed filter

Range reversed filter

Only allows elements outside the specified sequential range

  ('a' <> 'f').stream.dropRange(2 <> 3).tp

  // Output
  Stream(a, b, e, f)

Note: Range indexing starts from 0

Inherited from
_drop
Source
_drop.scala
inline def dropSequence[A](seq: Stream[A]): Stream[A]
Inherited from
_drop
Source
_drop.scala
inline def dropSequenceBy[A](f: A => B, seq: Stream[B]): Stream[A]
Inherited from
_drop
Source
_drop.scala
inline def dropValues[A](v: Stream[A]): Stream[A]

Multi value reversed filter

Multi value reversed filter

Drops only provided set of values

  (0 <>> 10).stream.dropValues(8,3,5).tp

  // Output
  Stream(0, 1, 2, 4, 6, 7, 9)

Note: dropValues is macro optimized when given value tuples sized from 2 to 5

Inherited from
_drop
Source
_drop.scala
inline def dropValuesBy[A](f: A => B, v: Stream[B]): Stream[A]

Mapped multi value reversed filter

Mapped multi value reversed filter

Drops only values, which convert to provided set of values

  (0 <>> 10).stream.dropValuesBy(_ % 5, (1,3) ).tp

  // Output
  Stream(0, 2, 4, 5, 7, 9)

Note: dropValuesBy is macro optimized when given value tuples sized from 2 to 5

Inherited from
_drop
Source
_drop.scala
inline def dropVoid[A](using d: Any.Def.Void[A]): Stream[A]

Void value reversed filter

Void value reversed filter

Drops elements which test to be void

Inherited from
_drop
Source
_drop.scala
inline def dropWhile[A](f: A => Boolean): Stream[A]

Conditional reversed head filter

Conditional reversed head filter

Discards first consecutive elements satisfying the condition

  def stream = (1 <> 5).stream ++ (1 <> 5)

  stream.tp                     // Prints Stream(1, 2, 3, 4, 5, 1, 2, 3, 4, 5)

  stream.dropWhile(_ <= 3).tp   // Prints Stream(4, 5, 1, 2, 3, 4, 5)

Note: Everything starting from the first non compliant element will be allowed (including later compliant elements)

Inherited from
_drop
Source
_drop.scala
inline def enablePreview[A]: Stream[A] & Stream.Preview[A]

Enables preview capabilities

Enables preview capabilities

Returns Stream.Preview, which allows pre-loading and inspecting elements before they are read

  def strm : Stream[String] = ???

  if(strm.enablePreview.previewSize > 1000) "Stream is over 1K".TP
Inherited from
_mutate
Source
_mutate.scala
inline def enableSize[A]: Stream[A] & Able.Size

Adds sizing information

Adds sizing information

If Stream already has sizing, this method is a simple cast, otherwise, the elements might be buffered and counted.

Inherited from
_mutate
Source
_mutate.scala
inline def equalsSequence[A](v: Stream[A]): Boolean

Equal check

Equal check

Iterates both streams and compares all corresponding elements

Returns true if all are equal, false otherwise
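
An illustrative sketch (expected results, not from the original docs):

  (1 <> 5).stream.equalsSequence(1 <> 5)   // Returns true

  (1 <> 5).stream.equalsSequence(1 <> 6)   // Returns false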

Inherited from
_evaluate
Source
_evaluate.scala
inline def equalsSequenceResult[A](v: Stream[A]): Result[true]

Equal check

Equal check

Iterates both streams and compares all corresponding elements

When first not equal pair is found, the problem result is returned

If all elements are equal, Result[true] is returned

(0 <> 10).stream.equalsSequenceResult(0 <> 10).tp
// Prints: Result(true)

(0 <> 10).stream.equalsSequenceResult(0 <>> 10).tp
// Prints: Result(Problem(Second stream has less elements))

((0 <> 5).stream + 7 + 8).equalsSequenceResult(0 <> 10).tp
// Prints: Result(Problem(Fail at index 6: 7 != 6))

Note: The returned problem contains message with basic description

Inherited from
_evaluate
Source
_evaluate.scala
inline def exists[A](f: A => Boolean): Boolean

Exists check

Exists check

Returns true if there is an element satisfying the given predicate
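
For example (an illustrative sketch, results assumed):

  (1 <> 100).stream.exists(_ > 99)    // Returns true

  (1 <> 100).stream.exists(_ > 100)   // Returns false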

Inherited from
_evaluate
Source
_evaluate.scala
inline def FILTER[A](f: A => Boolean): Stream[A]

Legacy heavy filter

Legacy heavy filter

Filters Stream elements according to given function

FILTER is functionally equivalent to filter, but is fully inlined. It makes compiled code larger, but guarantees the best possible performance on large streams.

Note: TAKE is usually used instead.

Inherited from
_Filter
Source
__.scala
inline def filter[A](f: A => Boolean): Stream[A]

Legacy filter

Legacy filter

Filters Stream elements according to given function

  (0 <>> 10).stream.filter(_ > 5).tp

  // Output
  Stream(6, 7, 8, 9)

Note: take is usually used instead.

Inherited from
_Filter
Source
__.scala
inline def find[A](f: A => Boolean): A

Find value

Find value

Finds the first value accepted by given predicate

 (1 <> 1000).stream.find(_ > 100).tp  // Prints 101

Note: If the value is not found, find fails; use findOpt in most cases

Inherited from
_evaluate
Source
_evaluate.scala
inline def findOpt[A](f: A => Boolean): Opt[A]

Optional find value

Optional find value

Finds the first value accepted by given predicate or returns void option if not found

(1 <> 1000).stream.findOpt(_ > 100).tp   // Prints Opt(101)

(1 <> 10).stream.findOpt(_ > 100).tp     // Prints Opt(VOID)
Inherited from
_evaluate
Source
_evaluate.scala
inline def findPositionOpt[A](f: A => Boolean): Int.Opt

Find index

Find index

Optionally returns index for the first element satisfying the predicate or Int.Opt(VOID) if none found

  (50 <> 500).stream.findPositionOpt(_ == 400)  // Returns Int.Opt(350)
Inherited from
_evaluate
Source
_evaluate.scala
inline def findSequencePositionOpt[A](v: Stream[A]): Int.Opt

Find start index

Find start index

Optionally returns index where given stream value sequence matches current stream values
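
An illustrative sketch (expected result, assuming the range-to-stream conversion used elsewhere on this page):

  ('a' <> 'z').stream.findSequencePositionOpt('c' <> 'e')   // Returns Int.Opt(2)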

Inherited from
_evaluate
Source
_evaluate.scala
inline def FLAT_MAP[A](f: A => Stream[B])(using s: Specialized[B]): s.Stream

Heavy flat map

Heavy flat map

FLAT_MAP is functionally equivalent to flatMap, but is fully inlined. It makes compiled code larger, but guarantees the best possible performance on large streams.

Inherited from
_map
Source
_map.scala
inline def flatMap[A](f: A => Stream[B])(using s: Specialized[B]): s.Stream

Flat map

Flat map

Creates a new Stream by applying given function to all elements of current Stream and concatenating the results

(1 <> 3).stream.flatMap(i => Stream(i, i*10, i*100)).tp

// Output
Stream(1, 10, 100, 2, 20, 200, 3, 30, 300)
Inherited from
_map
Source
_map.scala
inline def flatten[A](using d: Any.Def.ToStream[A, B], s: Specialized[B]): s.Stream

Converts a stream of streams into a flat stream

Converts a stream of streams into a flat stream

The operation will only compile if stream elements are streams or stream convertible entities, like Able.Stream, Iterable, Iterator, etc.

val vs: Stream[Stream[Char]] = Stream(
  'a' <> 'd',
  Pack('x', 'y', 'z'),
  Vector('v', 'e', 'c', 't', 'o', 'r'))

vs.flatten.tp // Prints Stream(a, b, c, d, x, y, z, v, e, c, t, o, r)
Inherited from
_map
Source
_map.scala
inline def FOLD[A](start: A)(f: (A, A) => A): A

Heavy Fold

Heavy Fold

Folds elements with a binary function

FOLD is functionally equivalent to fold, but is fully inlined. It makes compiled code larger, but guarantees the best possible performance on large streams.

Inherited from
_aggregate
Source
_aggregate.scala
inline def fold[A](start: A)(f: (A, A) => A): A

Fold

Fold

Folds elements with a binary function

    // Calculate sum of first 1000 Ints

    (1 <> 1000).stream.fold(0)(_ + _) // Returns 500500
Value Params
f

binary function to fold elements with

start

seed value to start with

Inherited from
_aggregate
Source
_aggregate.scala
inline def FOLD_AS[A](start: B)(f: (B, A) => B): B

Heavy Fold and convert

Heavy Fold and convert

Folds and converts elements with a binary function

FOLD_AS is functionally equivalent to foldAs, but is fully inlined. It makes compiled code larger, but guarantees the best possible performance on large streams.

Inherited from
_aggregate
Source
_aggregate.scala
inline def foldAs[A](start: B)(f: (B, A) => B): B

Fold and convert

Fold and convert

Folds and converts elements with a binary function

    // Calculate sum of first 1000 Ints

    (1 <> 1000).stream.foldAs(0L)(_ + _) // Returns 500500
Value Params
f

binary function to fold elements with

Note: When folding an AnyRef stream as a primitive value, there will be value boxing. Use FOLD_AS instead, which will be perfectly specialized.

start

seed value to start with

Inherited from
_aggregate
Source
_aggregate.scala
inline def FOREACH[A](f: A => U): Unit

Heavy process stream

Heavy process stream

Applies given function to each stream element

FOREACH is functionally equivalent to foreach, but is fully inlined. It makes compiled code larger, but guarantees the best possible performance on large streams.

Inherited from
_process
Source
_process.scala
inline def foreach[A](f: A => U): Unit

Process stream

Process stream

Applies given function to each stream element

 ('A' <> 'C').stream.foreach(_.tp)

 // Output
 A
 B
 C
Inherited from
_process
Source
_process.scala
inline def foreachIndexed[A](f: (Int, A) => U, start: Int): Unit

For each indexed

For each indexed

Calls given function with counter

 ('A' <> 'C').stream.foreachIndexed((i,v) => "Element " + i + " = " + v tp(), 1)

 // Output
 Element 1 = A
 Element 2 = B
 Element 3 = C
Value Params
start

starting value for indexing

Inherited from
_process
Source
_process.scala
inline def fornil[A](f: => U): Unit

Run for nonexistent value

Run for nonexistent value

Runs given function only if stream is empty.

This operation is rarely useful and is provided for consistency.

Use peekEmpty instead, it can be combined with other processing

Inherited from
_process
Source
_process.scala
inline def group[A](f: (A, A) => Boolean, peek: (A, Boolean) => U): Stream[Stream[A]]

Group by test

Group by test

Puts elements in the same group based on a function test for every two consecutive elements

   // Putting Ints into groups of 3

   (0 <> 20).stream.group(_ / 3 == _ / 3).print

   // Output
   ---------------
   ?
   ---------------
   Stream(0, 1, 2)
   Stream(3, 4, 5)
   Stream(6, 7, 8)
   Stream(9, 10, 11)
   Stream(12, 13, 14)
   Stream(15, 16, 17)
   Stream(18, 19, 20)
   ---------------
Value Params
f

function for two consecutive elements. If 'false' is returned, the second tested element will start a new group

peek

side-effect convenience function will run for each element. Boolean parameter indicates if the element starts a new group

Inherited from
_group
Source
_group.scala
inline def group[A]: Stream[Stream[A]]

Simple grouping

Simple grouping

Puts consecutive elements in the same group if they are equal

   def stream =  Stream(1, 2, 3).repeat(3)

   stream.tp           // Prints Stream(1, 1, 1, 2, 2, 2, 3, 3, 3)

   stream.group.print

   // Output
   ------------
   ?
   ------------
   Stream(1, 1, 1)
   Stream(2, 2, 2)
   Stream(3, 3, 3)
   ------------

Note: Non consecutive equal elements will end up in different groups. Prior ordering might be needed

Inherited from
_group
Source
_group.scala
inline def groupBy[A](f: A => Any, more: A => Any*): Stream[Stream[A]]

Grouping on properties

Grouping on properties

Puts consecutive elements in the same group if all the specified properties are equal

When properties change, a new group is started

    ('#' <> '|').stream.groupBy(_.isLetter, _.isDigit).print

   // Output
   ---------------------------------------------------------------------------------
   ?
   ---------------------------------------------------------------------------------
   Stream(#, $, %, &, ', (, ), *, +, ,, -, ., /)
   Stream(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
   Stream(:, ;, <, =, >, ?, @)
   Stream(A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V, W, X, Y, Z)
   Stream([, \, ], ^, _, `)
   Stream(a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z)
   Stream({, |)
   ---------------------------------------------------------------------------------
Value Params
properties

a set of functions, each indicating element property

Inherited from
_group
Source
_group.scala
inline def groupEvery[A](cnt: Int): Stream[Stream[A]]

Fixed size groups

Fixed size groups

Puts consecutive elements into fixed size groups

('a' <> 'z').stream.groupEvery(8).print

// Output
-------------------------
?
-------------------------
Stream(a, b, c, d, e, f, g, h)
Stream(i, j, k, l, m, n, o, p)
Stream(q, r, s, t, u, v, w, x)
Stream(y, z)
-------------------------
Inherited from
_group
Source
_group.scala
inline def groupWith[A](f: A => B): Stream[(B, Stream[A])]

Grouping on a property

Grouping on a property

Puts consecutive elements in the same group if their properties are equal

  (0 <> 20).stream.groupWith(_ / 3).print

  // Output
  -- -------------
  _1 _2
  -- -------------
  0  Stream(0, 1, 2)
  1  Stream(3, 4, 5)
  2  Stream(6, 7, 8)
  3  Stream(9, 10, 11)
  4  Stream(12, 13, 14)
  5  Stream(15, 16, 17)
  6  Stream(18, 19, 20)
  -- -------------

Note: groupWith also returns the grouped property value (unlike groupBy)

Value Params
properties

a set of functions, each indicating an element property

Inherited from
_group
Source
_group.scala
inline def hideSizeData[A]: Stream[A]

Lose size information

Lose size information

Many streams know their current size and return it via sizeLongOpt

hideSizeData drops sizing information, so some optimizations will not be available

This is primarily for testing and debugging

Inherited from
_mutate
Source
_mutate.scala
inline def isEvery[A](f: A => Boolean): Boolean

Forall check

Forall check

Returns true if every single element satisfies the given predicate
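
For example (an illustrative sketch, results assumed):

  (1 <> 100).stream.isEvery(_ > 0)    // Returns true

  (1 <> 100).stream.isEvery(_ < 50)   // Returns false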

Inherited from
_evaluate
Source
_evaluate.scala
inline def iterator[A]: scala.collection.Iterator[A]

Iterator view

Iterator view

Wraps current stream as scala.collection.Iterator
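
An illustrative sketch (expected result, not from the original docs):

  val it: scala.collection.Iterator[Int] = (1 <> 3).stream.iterator

  it.toList   // Returns List(1, 2, 3)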

Inherited from
_toScala
Source
_toScala.scala
inline def join[A](v: A): Stream[A]

Join element

Join element

Creates a new Stream with given element appended to current Stream

  (1 <> 5).stream.join(99).join(100).tp

  // Output
  Stream(1, 2, 3, 4, 5, 99, 100)
Inherited from
_extend
Source
_extend.scala
inline def joinAll[A](v: Stream[A]): Stream[A]

Join all

Join all

Creates a new Stream with given elements appended to current Stream

  ('1' <> '9').stream.joinAll('a' <> 'd').joinAll('A' <> 'D').tp

  // Output
  Stream(1, 2, 3, 4, 5, 6, 7, 8, 9, a, b, c, d, A, B, C, D)
Inherited from
_extend
Source
_extend.scala
inline def joinAllAt[A](index: Int, v: Stream[A]): Stream[A]

Join all at position

Join all at position

Creates a new Stream with given elements inserted into current Stream at given index

If index is out of range, the elements are prepended or appended

   ('a' <> 'f').stream.joinAllAt(3, 'X' <> 'Z').tp

   // Output
   Stream(a, b, c, X, Y, Z, d, e, f)
Inherited from
_extend
Source
_extend.scala
inline def joinAt[A](index: Int, v: A): Stream[A]

Join element at position

Join element at position

Creates a new Stream with given element inserted into current Stream at given index

If index is out of range, the element is prepended or appended

 ('a' <> 'd').stream.joinAt(2, 'X').tp

  // Output
  Stream(a, b, X, c, d)
Inherited from
_extend
Source
_extend.scala
inline def last[A]: A

Last element

Last element

Returns the last stream element

Fails if empty

Inherited from
_evaluate
Source
_evaluate.scala
inline def lastOpt[A]: Opt[A]

Last element

Last element

Optionally returns the last element or Opt(VOID)
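
For example (an illustrative sketch, results assumed):

  ('a' <> 'e').stream.lastOpt    // Returns Opt(e)

  ('a' <>> 'a').stream.lastOpt   // Returns Opt(VOID)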

Inherited from
_evaluate
Source
_evaluate.scala
inline def load[A]: Stream[A] & Able.Size

Preload all

Preload all

Immediately loads all stream elements into memory, so they are no longer dependent on underlying sources.

  def s : Stream[String] = ???

  s.load

  // is functionally same as

  s.toBuffer.stream
Inherited from
_mutate
Source
_mutate.scala
inline def makeString[A](separator: String)(using t: Any.Def.Tag[A]): String

Convert to String

Convert to String

The result is a concatenation of all elements with given separator

   ('a' <> 'j').stream.makeString("")            // Returns abcdefghij

   ('a' <> 'j').stream.makeString("|")           // Returns a|b|c|d|e|f|g|h|i|j

Inherited from
_toString
Source
_toString.scala
inline def map[A](f: A => B)(using s: Specialized[B]): s.Stream

Simple map

Simple map

Creates a new Stream where each element is a result of applying given function to current Stream elements

(0 <>> 26).stream.map(i => ('a' + i).toChar).tp

// Output
Stream(a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z)
Inherited from
_map
Source
_map.scala
inline def MAP[A](f: A => B)(using s: Specialized[B]): s.Stream

Heavy map

Heavy map

MAP is functionally equivalent to map, but is fully inlined. It makes compiled code larger, but guarantees the best possible performance on large streams.

Inherited from
_map
Source
_map.scala
inline def MAP_OPT[A](f: A => OPT)(using o: Specialized.Opt[B, OPT], s: Specialized[B]): s.Stream

Heavy optional map

Heavy optional map

MAP_OPT is functionally equivalent to mapOpt, but is fully inlined. It makes compiled code larger, but guarantees the best possible performance on large streams.

Inherited from
_map
Source
_map.scala
inline def mapIf[A](condition: A => Boolean, fun: A => A): Stream[A]

Conditional map

Conditional map

This is a synthetic operation which is inlined as:

map(v => if(condition(v)) fun(v) else v)

In some circumstances using "mapIf" does not make sense, in others it is really useful.

Inherited from
_map
Source
_map.scala
inline def mapOpt[A](f: A => OPT)(using s: Specialized[B]): s.Stream

Optional map

Optional map

Creates a new Stream where each element is a result of applying given function to Stream elements. If the function returns void option, the element is dropped.

(1 <> 10).stream.mapOpt(i => if(i % 2 == 0) "Even_"+i else VOID).tp

// Output
Stream(Even_2, Even_4, Even_6, Even_8, Even_10)

Pattern matching can be used, but the last void case must always be provided explicitly:

(0 <>> 26).stream.mapOpt{
 case i if(i % 2 == 0) => ('a' + i).toChar
 case _                => VOID
}.tp

// Output
Stream(a, c, e, g, i, k, m, o, q, s, u, w, y)

Note:

  • All cases must return the same type, otherwise the operation will not compile.
  • mapOpt is functionally similar to collect, but is faster (PartialFunction in collect has to be evaluated twice)
Inherited from
_map
Source
_map.scala
inline def max[A](using o: Ordering[A]): A

Maximum

Maximum

Computes maximum value

Fails for empty streams
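
For example (an illustrative sketch, results assumed):

  Stream(5, 1, 4, 2, 3).max   // Returns 5

  ('a' <> 'z').stream.max     // Returns z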

Inherited from
_calculate
Source
_calculate.scala
inline def maxBy[A](f: A => B)(using o: Ordering[B]): A

Maximum by property

Maximum by property

Computes maximum value based on given function

Fails for empty streams

Inherited from
_calculate
Source
_calculate.scala
inline def maxByOpt[A](f: A => B)(using o: Ordering[B]): Opt[A]

Optional maximum by property

Optional maximum by property

Computes maximum value based on given function or returns void option for empty streams

Inherited from
_calculate
Source
_calculate.scala
inline def maxOpt[A](using o: Ordering[A]): Opt[A]

Optional maximum

Optional maximum

Computes maximum value or returns void option for empty streams

Inherited from
_calculate
Source
_calculate.scala
inline def min[A](using o: Ordering[A]): A

Minimum

Minimum

Computes minimum value

Fails for empty streams

Inherited from
_calculate
Source
_calculate.scala
inline def minBy[A](f: A => B)(using o: Ordering[B]): A

Minimum by property

Minimum by property

Computes minimum value based on given function

Fails for empty streams

Inherited from
_calculate
Source
_calculate.scala
inline def minByOpt[A](f: A => B)(using o: Ordering[B]): Opt[A]

Optional minimum by property

Optional minimum by property

Computes minimum value based on given function or returns void option for empty streams

Inherited from
_calculate
Source
_calculate.scala
inline def minOpt[A](using o: Ordering[A]): Opt[A]

Optional minimum

Optional minimum

Computes minimum value or returns void option for empty streams

Inherited from
_calculate
Source
_calculate.scala
@targetName("nonEmptyOpt")
inline def nonEmptyOpt[A]: Opt[Stream[A]]
Inherited from
_mutate
Source
_mutate.scala
inline def pack[A](using s: Specialized[A]): s.Pack

Pack elements

Pack elements

Returns stream elements as Pack

Inherited from
_toCollections
Source
_toCollections.scala
def parallel[A]: Stream.Flow[A]

Parallel

Parallel

Returns Stream.Flow with parallel execution

Each consecutive element will be sent to a new thread for processing

  (1 <> 5).stream
     .parallel
     .map("Value: " + _ + "\t" + Thread.currentThread.getName)
     .foreach(println)

  // Possible Output
  Value: 1    ForkJoinPool.commonPool-worker-9
  Value: 3    ForkJoinPool.commonPool-worker-11
  Value: 2    main
  Value: 4    ForkJoinPool.commonPool-worker-2
  Value: 5    ForkJoinPool.commonPool-worker-4
Inherited from
_parallel
Source
_parallel.scala
def parallelIf[A](v: Boolean): Stream.Flow[A]

Conditionally parallel

Conditionally parallel

Returns Stream.Flow with parallel or sequential implementation, depending on given parameter

   (1 <> 50).stream.parallelIf(true).isParallel   // Returns true

   (1 <> 50).stream.parallelIf(false).isParallel  // Returns false
Inherited from
_parallel
Source
_parallel.scala
def parallelIfOver[A](threshold: Int): Stream.Flow[A]

Conditionally parallel

Conditionally parallel

Returns Stream.Flow with parallel or sequential implementation, depending on whether the stream's element count is equal to or greater than the given threshold

  (1 <> 50).stream.parallelIfOver(100).isParallel   // Returns false

  (1 <> 200).stream.parallelIfOver(100).isParallel  // Returns true
Inherited from
_parallel
Source
_parallel.scala
def parallelWithPriority[A](p: J.Priority, parallelism: Int): Stream.Flow[A]

Parallel with Priority

Parallel with Priority

This is a very expensive operation, because it creates a custom thread pool. It is only suitable for long-running streams

   (1 <> 100).stream.parallelWithPriority(MIN, 4).foreach(v => ())

   (1 <> 100).stream.parallelWithPriority(MAX).foreach(v => ())

   (1 <> 100).stream.parallelWithPriority(J.Priority(5), 4).foreach(v => ())

Note: parallelism determines how many parallel threads are allowed. Default value is CPU core count minus 1

Inherited from
_parallel
Source
_parallel.scala
inline def partition[A](p: A => Boolean, more: A => Boolean*): Stream[Stream[A]]

Predicate grouping

Predicate grouping

All stream elements are grouped by given predicates, which are applied in sequence. Thus if an element is accepted into a group, it will not be evaluated by the rest of the filters.

The resulting stream size will be equal to the number of predicates plus one. The last group will hold spill over elements, not accepted by any predicate. Groups can be empty.

val (odd,even) = (1 <> 10).stream.partition(_ % 2 == 1).tuple2

odd.tp
even.tp

// Output
Stream(1, 3, 5, 7, 9)
Stream(2, 4, 6, 8, 10)


// Age groups
(1 <> 80).stream.partition(_ <= 12, _ in 13 <> 19, _ < 30, _ in 30 <> 40, _ < 50, _ < 65).print

-------------------------------------------------------------------
?
-------------------------------------------------------------------
Stream(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12)
Stream(13, 14, 15, 16, 17, 18, 19)
Stream(20, 21, 22, 23, 24, 25, 26, 27, 28, 29)
Stream(30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40)
Stream(41, 42, 43, 44, 45, 46, 47, 48, 49)
Stream(50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64)
Stream(65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80)
-------------------------------------------------------------------
Inherited from
_group
Source
_group.scala
inline def peek[A](f: A => U): Stream[A]

Inspect

Inspect

The given function will be run with every passing stream element.

  (1 <> 5).stream.peek(_.tp).drain

  // Output
  1
  2
  3
  4
  5
Inherited from
_peek
Source
_peek.scala
inline def peekEmpty[A](f: => U): Stream[A]

Peek empty

Peek empty

The given function is executed once, only if stream is empty

  (1 <> 10).stream.drop(_ > 0).peekEmpty("Stream is empty".tp).drain

  // Output
  Stream is empty
Inherited from
_peek
Source
_peek.scala
inline def peekEnd[A](f: (Int, Time.Length) => U): Stream[A]

Peek end

Peek end

The given function is executed once, when stream is exhausted

The function receives the total element count and the Time.Length it took for all elements to pass

  (1 <> 10).stream
    .peek(_ => J.sleep(100.Millis))
    .peekEnd((cnt,time) => "Elements: "+cnt+"  total time: "+time.tag tp())
    .drain

  // Output
  Elements: 10  total time: 0.904106700 sec

Note: This will not run for empty streams

Inherited from
_peek
Source
_peek.scala
inline def peekEvents[A](f: Stream.Custom.Event => U): Stream[A]

Custom events

Custom events

Allows setting up multiple Stream.Custom.Event monitoring callbacks

 (1 <> 1000).stream
   .peek(_ => J.sleep(5.Millis))
   .peekEvents(e => {
     e.onBeforeFirst(t   => "Started at: "+ t.dayTime.tag tp())
     e.onEvery(1.Second, (c,t) => "  Processed "+c+" in "+t.tag tp())
     e.onAfterLast((c,t) => "Finished in: "+ t.tag + ",  Element count: " + c tp())
   })
   .drain

 // Output

 Started at: 14:05:39.333
   Processed 187 in 1.018583400 sec
   Processed 371 in 2.020508100 secs
   Processed 557 in 3.021843300 secs
   Processed 743 in 4.023837400 secs
   Processed 928 in 5.026982 secs
 Finished in: 5.411673300 secs, Element count: 1000
Inherited from
_peek
Source
_peek.scala
inline def peekIndexed[A](f: (Int, A) => U, start: Int): Stream[A]

Indexed peek

Indexed peek

The given function will be executed with every passing element and its index.

  ('a' <> 'f').stream.peekIndexed((i,c) => (""+i+" : "+c).tp, 1).drain

  // Output
  1 : a
  2 : b
  3 : c
  4 : d
  5 : e
  6 : f

Note. By default indexing starts with 0, but it can be specified

Inherited from
_peek
Source
_peek.scala

Custom monitor

Custom monitor

Adds a pre-built Stream.Custom.Event.Monitor

If passed monitor tests to be void (.isEmpty), the operation is ignored

Inherited from
_peek
Source
_peek.scala
inline def peekStart[A](f: Time => U): Stream[A]

Peek start

Peek start

The given function is executed once, just before the first element is about to pass.

  ('a' <> 'f').stream.peekStart(time => "Started at: "+time).drain

Note: This will not run for empty streams

Inherited from
_peek
Source
_peek.scala
inline def printId[A](using t: Any.Def.Tag[A]): Unit

Print to console with row id

Print to console with row id

Same as regular print, but with added first column identifying the object


('A' <> 'F').stream.map(v => (v.Int, v)).printId

// Output
----------------- -- --
Id                _1 _2
----------------- -- --
scala.Tuple2@dzkr 65 A
scala.Tuple2@zn1  66 B
scala.Tuple2@71j3 67 C
scala.Tuple2@562u 68 D
scala.Tuple2@c8tt 69 E
scala.Tuple2@p0m8 70 F
----------------- -- --
Inherited from
_print
Source
_print.scala
inline def process[A](foreachFun: A => U, fornilFun: => W): Unit

Process elements or empty case

Process elements or empty case

Applies given function to each stream element or runs second function when stream is empty

 ('A' <>> 'A').stream.process(_.tp, "Empty".tp)

 // Output
 Empty
Inherited from
_process
Source
_process.scala
inline def range[A](using o: Ordering[A]): Range[A]

Range

Range

Computes value range

Fails for empty streams

Inherited from
_calculate
Source
_calculate.scala
inline def rangeOpt[A](using o: Ordering[A]): Opt[Range[A]]

Optional range

Optional range

Computes value range or returns void option for empty streams

Inherited from
_calculate
Source
_calculate.scala
inline def raw[A](using sp: Specialized.Primitive[A]): sp.Stream

Specialize

Specialize

Converts current stream into specialized on underlying primitive type. If stream is already specialized, the conversion is a simple cast.

   val s  : Stream[Int]     = 1 <> 10

   val ss : Int.Stream = s.raw

Note: If underlying type is not primitive, the method will not compile

Inherited from
_mutate
Source
_mutate.scala
inline def read[A]: A

Next element

Next element

Delivers next stream element

 val s : Stream[Char] = 'A' <> 'Z'

 s.read.tp  // Prints A
 s.read.tp  // Prints B
 s.read.tp  // Prints C

Note: If stream is empty, read will fail. So, use a safer readOpt in most cases

Inherited from
_read
Source
_read.scala
inline def readOpt[A]: Opt[A]

Next optional element

Next optional element

Delivers next stream element or void option if stream is empty

 val s : Stream[Char] = 'A' <> 'C'

 s.readOpt.tp  // Prints Opt(A)
 s.readOpt.tp  // Prints Opt(B)
 s.readOpt.tp  // Prints Opt(C)
 s.readOpt.tp  // Prints Opt(VOID)
Inherited from
_read
Source
_read.scala
inline def readStream[A](cnt: Int): Stream[A] & Able.Size

Read many elements

Read many elements

Immediately removes the given number of elements from the current stream and returns them as a new stream

 val s : Stream[Int] = 1 <> 12

 s.readStream(3).tp  // Prints Stream(1, 2, 3)
 s.readStream(4).tp  // Prints Stream(4, 5, 6, 7)
 s.readStream(7).tp  // Prints Stream(8, 9, 10, 11, 12)
 s.readStream(8).tp  // Prints Stream()

Note: If the requested number of elements is not available, fewer are returned (none if the stream is empty)

Inherited from
_read
Source
_read.scala
inline def reduce[A](f: (A, A) => A): A

Reduce

Reduce

Folds elements with a binary function

   // Calculate sum of first 1000 Ints

   (1 <> 1000).stream.reduce(_ + _) // Returns 500500

Note: There is no default value, and if the stream is empty, the operation fails. Use reduceOpt as a safer option

Value Params
f

binary function to fold elements with

Inherited from
_aggregate
Source
_aggregate.scala
inline def REDUCE[A](f: (A, A) => A): A

Heavy reduce

Heavy reduce

Folds elements with a binary function

REDUCE is functionally equivalent to reduce, but is fully inlined. It makes compiled code larger, but guarantees the best possible performance on large streams.

Inherited from
_aggregate
Source
_aggregate.scala
inline def REDUCE_OPT[A](f: (A, A) => A): Opt[A]

Heavy optional reduce

Heavy optional reduce

Folds elements with a binary function

REDUCE_OPT is functionally equivalent to reduceOpt, but is fully inlined. It makes compiled code larger, but guarantees the best possible performance on large streams.

Inherited from
_aggregate
Source
_aggregate.scala
inline def reduceOpt[A](f: (A, A) => A): Opt[A]

Optional reduce

Optional reduce

Folds elements with a binary function or returns empty option when stream is empty

    // Calculate sum of first 1000 Ints

    (1 <> 1000).stream.reduceOpt(_ + _) // Returns Opt(500500)
Value Params
f

binary function to fold elements with

Inherited from
_aggregate
Source
_aggregate.scala
inline def ref[A]: Stream[A]

Generalize

Generalize

If stream is specialized it will be up-cast to general Val.Stream type, and further operations will be general (unless they are specialized, like map)

  val special : Int.Pack  = (1 <> 10).stream.pack

  val general : Pack[Int] = (1 <> 10).stream.ref.pack

  special.getClass.tp // Prints class scalqa.lang.int.g.Pack

  general.getClass.tp // Prints class scalqa.val.pack.z.ArrayPack

Note: This is a true zero cost operation. It does not change byte code (only compiler context)

Inherited from
_mutate
Source
_mutate.scala
inline def repeat[A](times: Int): Stream[A]

Repeat elements

Repeat elements

Creates a new Stream where each element from the current Stream is repeated the given number of times

 (0 <> 2).stream.repeat(3).tp

 // Output
 Stream(0, 0, 0, 1, 1, 1, 2, 2, 2)
Inherited from
_extend
Source
_extend.scala
inline def replaceSequence[A](seq: Stream[A], to: Stream[A]): Stream[A]
Inherited from
_mutate
Source
_mutate.scala
inline def replaceSequenceBy[A](f: A => B, seq: Stream[B], to: Stream[A]): Stream[A]
Inherited from
_mutate
Source
_mutate.scala
inline def reverse[A]: Stream[A]

Reverse order

Reverse order

Re-arranges all elements in reverse order

('A' <> 'F').stream.reverse.tp  // Prints Stream(F, E, D, C, B, A)
Inherited from
_mutate
Source
_mutate.scala
inline def reverseEvery[A](size: Int): Stream[A]

Reverse order in segments

Reverse order in segments

Reverses order of elements within segments of fixed size

(1 <> 15).stream.reverseEvery(5).tp

(1 <> 15).stream.reverseEvery(5).reverseEvery(3).reverseEvery(7).tp

// Output
Stream(5, 4, 3, 2, 1, 10, 9, 8, 7, 6, 15, 14, 13, 12, 11)

Stream(7, 2, 1, 10, 5, 4, 3, 12, 11, 6, 15, 14, 9, 8, 13)

Use Case: Predefined Shuffle

For testing purposes it is often required to get elements in random order. However, the order cannot be completely random if we want to be able to replicate bugs

reverseEvery can shuffle elements in a predefined order which looks random

Inherited from
_mutate
Source
_mutate.scala
inline def shuffle[A]: Stream[A]

Randomize order

Randomize order

Re-arranges elements in random order

Note: "reverseEvery" might be a better choice if repeatable randomness is needed

Inherited from
_mutate
Source
_mutate.scala
inline def sizeLongOpt[A]: Long.Opt

Optional long size

Optional long size

Many streams can return their current element count. If the information is not available, void option is returned

var s = (Int.min.Long <> Int.max.toLong).stream

s.sizeLongOpt.tp    // Prints Long.Opt(4294967296)

s = s.take(_ > 10)  // static sizing is lost

s.sizeLongOpt.tp    // Prints Long.Opt(VOID)
Inherited from
_metadata
Source
_metadata.scala
inline def sizeOpt[A]: Int.Opt

Optional size

Optional size

Many streams can return their current element count. If the information is not available, void option is returned

Note: If the size is known but exceeds the integer range, void option is returned. For these cases use sizeLongOpt

 var s = ('a' <> 'z').stream

 s.sizeOpt.tp         // Prints Int.Opt(26)

 s = s.take(_ > 10)   // static sizing is lost

 s.sizeOpt.tp         // Prints Int.Opt(VOID)
Inherited from
_metadata
Source
_metadata.scala
inline def sliding[A](size: Int, step: Int): Stream[Stream[A]]

Sliding group view

Sliding group view

Example: group size 3 with step 1

 ('a' <> 'g').stream.sliding(3).print

 // Output
 ----------
 ?
 ----------
 Stream(a, b, c)
 Stream(b, c, d)
 Stream(c, d, e)
 Stream(d, e, f)
 Stream(e, f, g)
 ----------

Example: group size 4 with step 2

 ('a' <> 'g').stream.sliding(4,2).print

 // Output
 -------------
 ?
 -------------
 Stream(a, b, c, d)
 Stream(c, d, e, f)
 Stream(e, f, g)
 -------------
Inherited from
_group
Source
_group.scala
inline def sort[A](using o: Ordering[A]): Stream[A]

Sort

Sort

Sorts stream elements with given Ordering

  Stream(5, 1, 4, 2, 3).sort.tp  // Prints Stream(1, 2, 3, 4, 5)
Inherited from
_order
Source
_order.scala
inline def sortBy[A](f1: A => B, f2: A => C, f3: A => D)(using Ordering[B], Ordering[C], Ordering[D]): Stream[A]

Sort by three properties

Sort by three properties

Sorts the stream on the first property, then, if indeterminate, on the second, and so on

Inherited from
_order
Source
_order.scala
inline def sortBy[A](f1: A => B, f2: A => C)(using Ordering[B], Ordering[C]): Stream[A]

Sort by two properties

Sort by two properties

Sorts the stream on the first property and then, if indeterminate, on the second
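
An illustrative sketch (expected output, not from the original docs):

  Stream("bb", "a", "ab", "b").sortBy(_.length, _.head).tp

  // Output
  Stream(a, b, ab, bb)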

Inherited from
_order
Source
_order.scala
inline def sortBy[A](f: A => B)(using o: Ordering[B]): Stream[A]

Sort by property

Sort by property

Sorts stream of elements based on a single property

  Stream("aaaa", "bb", "ccc", "d").sortBy(_.length).tp

  // Output
  Stream(d, bb, ccc, aaaa)
Inherited from
_order
Source
_order.scala
inline def sortReversed[A](using o: Ordering[A]): Stream[A]

Sort reversed

Sort reversed

Reverse sorts stream elements with given Ordering

  Stream(5, 1, 4, 2, 3).sortReversed.tp  // Prints Stream(5, 4, 3, 2, 1)
Inherited from
_order
Source
_order.scala
inline def splitAt[A](positions: Int*): Stream[Stream[A]]

Positional split

Positional split

Splits Stream at specified positions

val (s1,s2,s3) = (0 <> 20).stream.splitAt(5, 15).tuple3

s1.tp   // Prints Stream(0, 1, 2, 3, 4)
s2.tp   // Prints Stream(5, 6, 7, 8, 9, 10, 11, 12, 13, 14)
s3.tp   // Prints Stream(15, 16, 17, 18, 19, 20)

Note. The same could be accomplished with readStream

val s3 = (0 <> 20).stream
val s1 = s3.readStream(5)
val s2 = s3.readStream(10)
Inherited from
_group
Source
_group.scala
inline def startsWithSequence[A](v: Stream[A]): Boolean

Equal start check

Equal start check

Checks if starting elements of two streams (to a point where one stream ends) are equal
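
For example (an illustrative sketch, results assumed):

  (0 <> 1000).stream.startsWithSequence(0 <> 10)   // Returns true

  (0 <> 1000).stream.startsWithSequence(1 <> 10)   // Returns false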

Inherited from
_evaluate
Source
_evaluate.scala
inline def startsWithSequenceResult[A](v: Stream[A]): Result[true]

Equal start check

Equal start check

Checks if starting elements of two streams (to a point where one stream ends) are equal

(0 <> 10).stream.startsWithSequenceResult(0 <> 1000).tp
// Prints: Result(true)

(0 <> 1000).stream.startsWithSequenceResult(0 <> 10).tp
// Prints: Result(true)

((0 <> 5).stream + 7 + 8).startsWithSequenceResult(0 <> 10).tp
// Prints: Result(Problem(Fail at index 6: 7 != 6))

Note: The returned problem result contains message with basic description

Inherited from
_evaluate
Source
_evaluate.scala
inline def sum[A](using v: Math.Sum[A]): A

Sum

Sum

Calculates sum of all values

For empty stream returns zero

    (1 <> 1000).stream.sum.tp // Prints 500500
Inherited from
_calculate
Source
_calculate.scala
inline def sumFew[A](fb: A => Opt[B], fc: A => Opt[C], fd: A => Opt[D], fe: A => Opt[E], ff: A => Opt[F])(using nb: Math.Sum[B], nc: Math.Sum[C], nd: Math.Sum[D], ne: Math.Sum[E], nf: Math.Sum[F]): (B, C) | (B, C, D) | (B, C, D, E) | (B, C, D, E, F)

Multi sum

Multi sum

Simultaneously computes up to 5 sum values for properties specified by given functions

Returns tuple of appropriate size with values corresponding to the given mappings

For empty Stream returned tuple will hold zeros

 (1 <> 1000).stream.sumFew(_ * 10, _ * 100).tp  // Prints (5005000, 50050000)

 val (first, second, third) = (1 <> 1000).stream.sumFew(v => v, _ * 10, _ * 100)

 first.tp     // Prints 500500
 second.tp    // Prints 5005000
 third.tp     // Prints 50050000
Inherited from
_calculate
Source
_calculate.scala
inline def sumOpt[A](using v: Math.Sum[A]): Opt[A]

Optional sum

Optional sum

Calculates sum of all values or returns void option for empty streams

    (1 <> 1000).stream.sumOpt.tp // Prints Opt(500500)
Inherited from
_calculate
Source
_calculate.scala
inline def synchronize[A]: Stream[A]

Synchronize access

Synchronize access

Nothing fancy, just a convenience "synchronized" wrapper

 val nonSyncStream: Stream[Int] = (0 <>> 10000).stream

 (1 <> 10000).stream.parallel.map(_ => nonSyncStream.read ).stream.sort.takeDuplicates.count.tp  // Prints anywhere from 0 to a few hundred

 val syncStream: Stream[Int] = (0 <>> 10000).stream.synchronize

 (1 <> 10000).stream.parallel.map(_ => syncStream.read ).stream.sort.takeDuplicates.count.tp    // Prints 0
Inherited from
_mutate
Source
_mutate.scala
inline def take[A](f: A => Boolean): Stream[A]

Main filter

Main filter

Only takes Stream elements satisfying the given function

  (0 <>> 10).stream.take(_ > 5).tp

  // Output
  Stream(6, 7, 8, 9)

Note: The traditional method filter is also available and can be used, but take is preferable in most cases.

Inherited from
_take
Source
_take.scala
inline def TAKE[A](f: A => Boolean): Stream[A]

Heavy filter

Heavy filter

Filters Stream elements according to given function

TAKE is functionally equivalent to take, but is fully inlined. It makes compiled code larger, but guarantees the best possible performance on large streams.

Inherited from
_take
Source
_take.scala
inline def takeDuplicates[A]: Stream[A]

Duplicates filter

Duplicates filter

Takes only elements equal to the one passed in the prior position

Note: To generally get all duplicates, the stream must be sorted to arrange them in sequence

   Stream(1,1,2,3,3,4,5,5,5).takeDuplicates.tp

   // Output
   Stream(1, 3, 5, 5)
Inherited from
_take
Source
_take.scala
inline def takeDuplicatesBy[A](f: A => B): Stream[A]

Mapped duplicates filter

Mapped duplicates filter

Takes only elements which evaluate to the same value as the element passed in the prior position

Note: To generally get all duplicates, the stream must be sorted by the mapping function

  (0 <> 10).stream.takeDuplicatesBy(_ / 2).tp

  // Output
  Stream(1, 3, 5, 7, 9)
Inherited from
_take
Source
_take.scala
inline def takeEvery[A](nTh: Int): Stream[A]

Every Nth element filter

Every Nth element filter

Only lets through every nTh element

  (1 <> 20).stream.takeEvery(4).tp   // Prints: Stream(4, 8, 12, 16, 20)
Inherited from
_take
Source
_take.scala
inline def takeFirst[A](n: Int): Stream[A]

Head filter

Head filter

Only takes given number of first elements

  (1 <> 10).stream.takeFirst(3).tp  // Prints  Stream(1, 2, 3)
Inherited from
_take
Source
_take.scala
inline def takeIndexed[A](f: (Int, A) => Boolean, start: Int): Stream[A]

Indexed filter

Indexed filter

Only lets through elements satisfying the given function, which also receives the element's sequential index

  ('a' <> 'z').stream.takeIndexed((i, _) => i >= 2 && i <= 7, 1).tp

  // Output
  Stream(b, c, d, e, f, g)

Note: By default indexing starts from 0, but starting value can also be explicitly specified.

Inherited from
_take
Source
_take.scala
inline def takeLast[A](n: Int): Stream[A]

Tail filter

Tail filter

Only takes given number of elements coming last

  (1 <> 10).stream.takeLast(3).tp  // Prints  Stream(8, 9, 10)

Note: This method will block on unlimited streams

Inherited from
_take
Source
_take.scala
inline def takeOnly[A](v: A): Stream[A]

Single value filter

Single value filter

Filters only specified value.

  (0 <>> 10).stream.takeOnly(5).tp

  // Output
  Stream(5)

Note: takeOnly is more efficient than general filter ".take(_ == value)", because there is no function involved.

Inherited from
_take
Source
_take.scala
inline def takeRange[A](i: Int.Range): Stream[A]

Range filter

Range filter

Only allows elements within the specified sequential range

  ('a' <> 'z').stream.takeRange(1 <> 7).tp

  // Output
  Stream(b, c, d, e, f, g, h)

Note: Range indexing starts from 0

Inherited from
_take
Source
_take.scala
inline def takeType[A](using t: scala.reflect.ClassTag[B]): Stream[B]

Type filter

Type filter

Only lets through elements of the specified type

  Stream(1, '2', "3", new Object(), 0.0).takeType[String].tp  // Prints: Stream(3)
Inherited from
_take
Source
_take.scala
inline def takeValues[A](v: Stream[A]): Stream[A]

Multi value filter

Multi value filter

Takes only provided set of values

    ('a' <> 'z').stream.takeValues('z','x','b').tp   // Prints Stream(b, x, z)

    ('a' <> 'z').stream.takeValues('b' <> 'f').tp    // Prints Stream(b, c, d, e, f)

Note: takeValues is macro optimized when given value tuples sized from 2 to 5

Inherited from
_take
Source
_take.scala
inline def takeValuesBy[A](f: A => B, v: Stream[B]): Stream[A]

Mapped multi value filter

Mapped multi value filter

Takes only values, which convert to provided set of values

  (0 <>> 10).stream.takeValuesBy(_ % 5, (1,3) ).tp

  // Output
  Stream(1, 3, 6, 8)

Note: takeValuesBy is macro optimized when given value tuples sized from 2 to 5

Inherited from
_take
Source
_take.scala
inline def takeWhile[A](f: A => Boolean): Stream[A]

Conditional head filter

Conditional head filter

Only takes first consecutive elements satisfying the condition

  def stream = (1 <> 5).stream ++ (1 <> 5)

  stream.tp                     // Prints Stream(1, 2, 3, 4, 5, 1, 2, 3, 4, 5)

  stream.takeWhile(_ <= 3).tp    // Prints Stream(1, 2, 3)

Note: Everything starting from the first non compliant element will be discarded (including later compliant elements)

Inherited from
_take
Source
_take.scala
inline def toArray[A](using t: scala.reflect.ClassTag[A], s: Specialized[A]): s.Array

Convert to Array

Convert to Array

Returns stream elements as Array

 val a : Array[Int] =  (1 <> 10).stream.toArray
Inherited from
_toCollections
Source
_toCollections.scala
inline def toBuffer[A](using s: Specialized[A]): s.Buffer

Convert to Buffer

Convert to Buffer

Returns stream elements as Buffer

Inherited from
_toCollections
Source
_toCollections.scala
inline def toIdx[A](using s: Specialized[A]): s.Idx

Convert to Idx

Convert to Idx

Returns stream elements as Idx

Inherited from
_toCollections
Source
_toCollections.scala
inline def toJavaIterator[A]: java.util.Iterator[A]

Convert to Java Iterator

Convert to Java Iterator

Wraps current stream as java.util.Iterator

Inherited from
_toJava
Source
_toJava.scala
inline def toJavaList[A]: java.util.List[A]

Convert to Java List

Convert to Java List

Returns stream elements as java.util.List

Inherited from
_toJava
Source
_toJava.scala
inline def toJavaSpliterator[A](splitSize: Int): java.util.Spliterator[A]

Convert to Java Spliterator

Convert to Java Spliterator

Wraps current stream as java.util.Spliterator

Inherited from
_toJava
Source
_toJava.scala
inline def toJavaStream[A](parallel: Boolean): java.util.stream.Stream[A]

Convert to Java Stream

Convert to Java Stream

Wraps current stream as java.util.stream.Stream

Inherited from
_toJava
Source
_toJava.scala
inline def toList[A]: scala.collection.immutable.List[A]

Convert to List

Convert to List

Returns stream elements as scala.collection.immutable.List

Inherited from
_toScala
Source
_toScala.scala
inline def toLookup[KEY, VALUE](using KEY: Specialized[KEY]): KEY.Lookup[VALUE]

Convert to Lookup

Convert to Lookup

Note. This operation is only available for streams holding tuples, like (KEY,VALUE)

Converts a stream of tuples to Lookup

val intLookup : Lookup[Int,Char] = ('A' <> 'F').stream.zipKey(_.toInt).toLookup

intLookup.pairStream.tp   // Prints Stream((69,E), (70,F), (65,A), (66,B), (67,C), (68,D))

val charLookup : Lookup[Char,Int] = ('A' <> 'F').stream.zipValue(_.toInt).toLookup

charLookup.pairStream.tp   // Prints Stream((E,69), (F,70), (A,65), (B,66), (C,67), (D,68))
Inherited from
_toCollections
Source
_toCollections.scala
inline def toLookupBy[A](f: A => KEY)(using KEY: Specialized[KEY]): KEY.Lookup[A]

Convert to Lookup

Convert to Lookup

Converts stream to a Lookup collection, where key is created with provided function

val intLookup : Lookup[Int,Char] = ('A' <> 'F').stream.toLookupBy(_.toInt)

intLookup.pairStream.tp   // Prints Stream((69,E), (70,F), (65,A), (66,B), (67,C), (68,D))
Inherited from
_toCollections
Source
_toCollections.scala
inline def toMap[KEY, VALUE]: scala.collection.immutable.Map[KEY, VALUE]

Convert to scala.Map

Convert to scala.Map

Note. This operation is only available for streams holding tuples, like (KEY,VALUE)

Converts a stream of tuples to scala.Map

Inherited from
_toScala
Source
_toScala.scala
inline def toMapBy[A](f: A => B): scala.collection.immutable.Map[B, A]

Convert to scala.Map

Convert to scala.Map

Converts stream to scala.Map, where key is created with provided function
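
An illustrative sketch (expected result, not from the original docs):

  val m: Map[Int, String] = Stream("a", "bb", "ccc").toMapBy(_.length)

  m(2)   // Returns bb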

Inherited from
_toScala
Source
_toScala.scala
inline def toProduct[A]: scala.Product

Convert to Product

Convert to Product

Returns stream elements as scala.Product
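
For example (illustrative sketch; assumes the resulting Product arity equals the element count):

  val p: scala.Product = (1 <> 3).stream.toProduct

  p.productArity.tp   // Prints 3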

Inherited from
_toScala
Source
_toScala.scala
inline def toSeq[A]: scala.collection.immutable.IndexedSeq[A]

Convert to Seq

Convert to Seq

Returns stream elements as scala.collection.immutable.IndexedSeq
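
For example (illustrative sketch; output comment shows the expected result):

  val s = (10 <> 12).stream.toSeq

  s(0).tp   // Prints 10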

Inherited from
_toScala
Source
_toScala.scala
inline def toSet[A](using s: Specialized[A]): s.Set

Convert to unique collection

Convert to unique collection

Returns stream elements as Set
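
For example (illustrative sketch; the concrete Set type is supplied by the Specialized given):

  val s = Stream(1, 2, 2, 3, 3, 3).toSet   // Unique collection holding 1, 2, 3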

Inherited from
_toCollections
Source
_toCollections.scala
inline def toText[A](using t: Any.Def.Tag[A]): String

Elements as multi-line String

Elements as multi-line String

Returns all elements as a String-formatted table

If elements implement Able.Doc, each 'doc' property value is placed in a different column

If elements implement scala.Product (like all Tuples), each Product element is placed in a different column

  ('a' <> 'e').stream.map(v => (v + "1", v + "2", v + "3", v + "4", v + "5")).toText.tp

  // Output
  -- -- -- -- --
  ?  ?  ?  ?  ?
  -- -- -- -- --
  a1 a2 a3 a4 a5
  b1 b2 b3 b4 b5
  c1 c2 c3 c4 c5
  d1 d2 d3 d4 d5
  e1 e2 e3 e4 e5
  -- -- -- -- --
Inherited from
_toString
Source
_toString.scala
inline def toVector[A]: scala.collection.immutable.Vector[A]

Convert to Vector

Convert to Vector

Returns stream elements as scala.collection.immutable.Vector
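
For example (illustrative sketch; output comment shows the expected result):

  val v: Vector[Int] = (1 <> 5).stream.toVector

  v(4).tp   // Prints 5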

Inherited from
_toScala
Source
_toScala.scala
inline def transpose[A](using f: A => Stream[B]): Stream[Stream[B]]

Transpose

Transpose

Transposes a matrix so that rows become columns

 def stream: Stream[Stream[Int]] = Stream(11 <> 15,
                                          List(21, 22, 23, 24, 25),
                                          Vector(31, 32, 33, 34, 35))

 stream.print

 stream.transpose.print

 // Output
 ---------------------
 ?
 ---------------------
 Stream(11, 12, 13, 14, 15)
 Stream(21, 22, 23, 24, 25)
 Stream(31, 32, 33, 34, 35)
 ---------------------

 -------------
 ?
 -------------
 Stream(11, 21, 31)
 Stream(12, 22, 32)
 Stream(13, 23, 33)
 Stream(14, 24, 34)
 Stream(15, 25, 35)
 -------------
Inherited from
_mutate
Source
_mutate.scala
inline def tuple10[A]: (A, A, A, A, A, A, A, A, A, A)

Convert to Tuple10

Convert to Tuple10

If Stream has fewer than 10 elements, the operation will fail.

Inherited from
_toTuple
Source
_toTuple.scala
inline def tuple11[A]: (A, A, A, A, A, A, A, A, A, A, A)

Convert to Tuple11

Convert to Tuple11

If Stream has fewer than 11 elements, the operation will fail.

Inherited from
_toTuple
Source
_toTuple.scala
inline def tuple12[A]: (A, A, A, A, A, A, A, A, A, A, A, A)

Convert to Tuple12

Convert to Tuple12

If Stream has fewer than 12 elements, the operation will fail.

Inherited from
_toTuple
Source
_toTuple.scala
inline def tuple13[A]: (A, A, A, A, A, A, A, A, A, A, A, A, A)

Convert to Tuple13

Convert to Tuple13

If Stream has fewer than 13 elements, the operation will fail.

Inherited from
_toTuple
Source
_toTuple.scala
inline def tuple14[A]: (A, A, A, A, A, A, A, A, A, A, A, A, A, A)

Convert to Tuple14

Convert to Tuple14

If Stream has fewer than 14 elements, the operation will fail.

Inherited from
_toTuple
Source
_toTuple.scala
inline def tuple15[A]: (A, A, A, A, A, A, A, A, A, A, A, A, A, A, A)

Convert to Tuple15

Convert to Tuple15

If Stream has fewer than 15 elements, the operation will fail.

Inherited from
_toTuple
Source
_toTuple.scala
inline def tuple16[A]: (A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A)

Convert to Tuple16

Convert to Tuple16

If Stream has fewer than 16 elements, the operation will fail.

Inherited from
_toTuple
Source
_toTuple.scala
inline def tuple17[A]: (A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A)

Convert to Tuple17

Convert to Tuple17

If Stream has fewer than 17 elements, the operation will fail.

Inherited from
_toTuple
Source
_toTuple.scala
inline def tuple18[A]: (A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A)

Convert to Tuple18

Convert to Tuple18

If Stream has fewer than 18 elements, the operation will fail.

Inherited from
_toTuple
Source
_toTuple.scala
inline def tuple19[A]: (A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A)

Convert to Tuple19

Convert to Tuple19

If Stream has fewer than 19 elements, the operation will fail.

Inherited from
_toTuple
Source
_toTuple.scala
inline def tuple2[A]: (A, A)

Convert to Tuple2

Convert to Tuple2

If Stream has fewer than 2 elements, the operation will fail.
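
For example (illustrative sketch; output comments show the expected result):

  val (x, y) = ('a' <> 'b').stream.tuple2

  x.tp   // Prints a
  y.tp   // Prints b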

Inherited from
_toTuple
Source
_toTuple.scala
inline def tuple20[A]: (A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A)

Convert to Tuple20

Convert to Tuple20

If Stream has fewer than 20 elements, the operation will fail.

Inherited from
_toTuple
Source
_toTuple.scala
inline def tuple21[A]: (A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A)

Convert to Tuple21

Convert to Tuple21

If Stream has fewer than 21 elements, the operation will fail.

Inherited from
_toTuple
Source
_toTuple.scala
inline def tuple22[A]: (A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A)

Convert to Tuple22

Convert to Tuple22

If Stream has fewer than 22 elements, the operation will fail.

Inherited from
_toTuple
Source
_toTuple.scala
inline def tuple3[A]: (A, A, A)

Convert to Tuple3

Convert to Tuple3

If Stream has fewer than 3 elements, the operation will fail.

Inherited from
_toTuple
Source
_toTuple.scala
inline def tuple4[A]: (A, A, A, A)

Convert to Tuple4

Convert to Tuple4

If Stream has fewer than 4 elements, the operation will fail.

Inherited from
_toTuple
Source
_toTuple.scala
inline def tuple5[A]: (A, A, A, A, A)

Convert to Tuple5

Convert to Tuple5

If Stream has fewer than 5 elements, the operation will fail.

Inherited from
_toTuple
Source
_toTuple.scala
inline def tuple6[A]: (A, A, A, A, A, A)

Convert to Tuple6

Convert to Tuple6

If Stream has fewer than 6 elements, the operation will fail.

Inherited from
_toTuple
Source
_toTuple.scala
inline def tuple7[A]: (A, A, A, A, A, A, A)

Convert to Tuple7

Convert to Tuple7

If Stream has fewer than 7 elements, the operation will fail.

Inherited from
_toTuple
Source
_toTuple.scala
inline def tuple8[A]: (A, A, A, A, A, A, A, A)

Convert to Tuple8

Convert to Tuple8

If Stream has fewer than 8 elements, the operation will fail.

Inherited from
_toTuple
Source
_toTuple.scala
inline def tuple9[A]: (A, A, A, A, A, A, A, A, A)

Convert to Tuple9

Convert to Tuple9

If Stream has fewer than 9 elements, the operation will fail.

Inherited from
_toTuple
Source
_toTuple.scala
inline def unfold[A](f: Stream[A] => Opt[A]): Stream[A]

Lazy generator

Lazy generator

Lazily unfolds the next stream value with a function that takes all prior values

If the given function returns an empty option, the stream ends

 // Unfolding Fibonacci Sequence

 (0 <> 1).stream.unfold(_.takeLast(2).sum).takeFirst(20).tp

 // Output
 Stream(0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181)

Note: .takeFirst(20) is needed because the unfolded stream is infinite and could not otherwise be printed

Inherited from
_extend
Source
_extend.scala
inline def unzip[A](using f: A => (B, C)): (Stream[B], Stream[C])

Unzips stream in two

Unzips stream in two

Unzips a stream of tupled values into two streams

 val pairs = ('a' <> 'g').stream.zipValue(_.toUpper).pack

 pairs.stream.tp  // Prints Stream((a,A), (b,B), (c,C), (d,D), (e,E), (f,F), (g,G))

 val (left, right) = pairs.stream.unzip

 left.tp   // Prints Stream(a, b, c, d, e, f, g)

 right.tp  // Prints Stream(A, B, C, D, E, F, G)
Inherited from
_zip
Source
_zip.scala
inline def zip[A](that: Stream[B]): Stream[(A, B)]

Merge

Merge

Merges two streams into one, creating tuples of corresponding elements

  (1 <> 100).stream.zip('A' <> 'D').tp  // Prints Stream((1,A), (2,B), (3,C), (4,D))

If one of the streams is shorter, the excess elements are lost

Inherited from
_zip
Source
_zip.scala
inline def zipAll[A](that: Stream[B], thisDflt: Opt[A], thatDflt: Opt[B]): Stream[(A, B)]

Merge stream

Merge stream

Merges two streams into one, creating tuples of corresponding elements

If one of the streams is shorter, the provided defaults are used. If a required default is not available, the operation fails

  ('a' <> 'f').stream.zipAll('A' <> 'H', '?', '?').tp

  // Output
  Stream((a,A), (b,B), (c,C), (d,D), (e,E), (f,F), (?,G), (?,H))
Value Params
that

the stream to merge with this

thatDflt

if that Stream has fewer elements, thatDflt will be used to fill the voids. Fails if thatDflt is required but not available

thisDflt

if this Stream has fewer elements, thisDflt will be used to fill the voids. Fails if thisDflt is required but not available

Inherited from
_zip
Source
_zip.scala
inline def zipFoldAs[A](start: B, f: (B, A) => B): Stream[(A, B)]

Merges current folding value

Merges current folding value

Creates a new Stream with each element paired with the running fold value produced by the given function

  (1 <> 7).stream.zipFoldAs(0L)(_ + _).print

  // "Running Total" Output
  -- --
  ?  ?
  -- --
  1  1
  2  3
  3  6
  4  10
  5  15
  6  21
  7  28
Inherited from
_zip
Source
_zip.scala
inline def zipIndex[A](start: Int): Stream[(Int, A)]

Merge number

Merge number

Creates a new Stream with elements paired with their sequential position, starting at the given value

Note: Index is the first element in the resulting tuples

   ('A' <> 'F').stream.zipIndex('A'.toInt).tp  // Prints Stream((65,A), (66,B), (67,C), (68,D), (69,E), (70,F))
Value Params
start

index initial value

Inherited from
_zip
Source
_zip.scala
inline def zipIndex[A]: Stream[(Int, A)]

Merge index

Merge index

Creates a new Stream with elements paired with their sequential position, starting at 0

  ('A' <> 'F').stream.zipIndex.tp

  // Output

  Stream((0,A), (1,B), (2,C), (3,D), (4,E), (5,F))

Note: Index is the first element in the resulting tuples

Inherited from
_zip
Source
_zip.scala
inline def zipKey[A](f: A => B): Stream[(B, A)]

Merge property first

Merge property first

Creates a new Stream with elements paired with their property, defined by the given function

The paired value is in the first tuple position

  ('A' <> 'F').stream.zipKey(_.toInt).tp  // Prints Stream((65,A), (66,B), (67,C), (68,D), (69,E), (70,F))
Inherited from
_zip
Source
_zip.scala
inline def zipNext[A]: Stream[(A, Opt[A])]

Merge with next

Merge with next

Creates a new Stream with elements paired with the optional next element

  (1 <> 5).stream.zipNext.tp  // Prints Stream((1,Opt(2)), (2,Opt(3)), (3,Opt(4)), (4,Opt(5)), (5,Opt(VOID)))
Inherited from
_zip
Source
_zip.scala
inline def zipPrior[A]: Stream[(Opt[A], A)]

Merge with prior

Merge with prior

Creates a new Stream with elements paired with the optional prior element

  (1 <> 5).stream.zipPrior.tp  // Prints Stream((Opt(VOID),1), (Opt(1),2), (Opt(2),3), (Opt(3),4), (Opt(4),5))
Inherited from
_zip
Source
_zip.scala
inline def zipValue[A](f: A => B): Stream[(A, B)]

Merge property

Merge property

Creates a new Stream with elements paired with their property, defined by the given function

The paired value is in the second tuple position

  ('A' <> 'F').stream.zipValue(_.toInt).tp  // Prints Stream((A,65), (B,66), (C,67), (D,68), (E,69), (F,70))
Inherited from
_zip
Source
_zip.scala