streamly-core-0.2.2: Streaming, parsers, arrays, serialization and more
Copyright    (c) 2020 Composewell Technologies
License      BSD-3-Clause
Maintainer   streamly@composewell.com
Stability    pre-release
Portability  GHC
Safe Haskell Safe-Inferred
Language     Haskell2010

Streamly.Data.Parser

Description

Parsers are stream consumers like folds with the following differences:

  • folds cannot fail but parsers can fail and backtrack.
  • folds can be composed as a Tee but parsers cannot.
  • folds can be used for scanning but parsers cannot.
  • folds can be converted to parsers.

This module implements parsers with stream fusion which compile to efficient loops comparable to the speed of C.

Using Parsers

This module provides elementary parsers and parser combinators that can be used to parse a stream of data. Additionally, all the folds from the Streamly.Data.Fold module can be converted to parsers using fromFold. All the parsing functionality provided by popular parsing libraries, and more, is available. Also see the Streamly.Unicode.Parser module for Char stream parsers.

A data stream can be transformed to a stream of parsed data elements. Parser combinators can be used to create a pipeline of folds or parsers such that the next fold or parser consumes the result of the previous parser. See parse and parseMany to run these parsers on a stream.
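For illustration, here is a rough sketch assuming the Setup imports listed below; the outputs shown are the expected results, not verified transcripts:

>>> Stream.parse (Parser.takeWhile (/= ',') Fold.toList) $ Stream.fromList "hello,world"
Right "hello"
>>> Stream.fold Fold.toList $ Stream.parseMany (Parser.wordBy (== ',') Fold.toList) $ Stream.fromList "a,b,c"
[Right "a",Right "b",Right "c"]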

Parser vs ParserK

There are two functionally equivalent parsing modules, Streamly.Data.Parser (this module) and Streamly.Data.ParserK. The latter is a CPS based wrapper over the former, and can be used for parsing in general. Streamly.Data.Parser enables stream fusion and should be preferred over Streamly.Data.ParserK for high performance stream parsing use cases. However, there are a few cases where this module is not suitable and ParserK should be used instead.

For static fusion, parser combinators have to use strict pattern matching on arguments of type Parser. This leads to an infinite loop when a parser is defined recursively, due to strict evaluation of the recursive call. For example, the following implementation loops infinitely because of the recursive use of the parser p in the *> combinator:

>>> import Streamly.Data.Parser (Parser)
>>> import qualified Streamly.Data.Fold as Fold
>>> import qualified Streamly.Data.Parser as Parser
>>> import qualified Streamly.Data.Stream as Stream
>>> import Control.Applicative ((<|>))
>>> :{
>>> p :: Monad m => Parser Char m String
>>> p = Parser.satisfy (== '(') *> p <|> Parser.fromFold Fold.toList
>>> :}

Use ParserK when recursive use is required:

>>> import Streamly.Data.ParserK (ParserK)
>>> import qualified Streamly.Data.StreamK as StreamK
>>> import qualified Streamly.Internal.Data.StreamK as StreamK (parse)
>>> import qualified Streamly.Internal.Data.ParserK as ParserK (adapt)
>>> :{
>>> p :: Monad m => ParserK Char m String
>>> p = ParserK.adapt (Parser.satisfy (== '(')) *> p <|> ParserK.adapt (Parser.fromFold Fold.toList)
>>> :}
>>> StreamK.parse p $ StreamK.fromStream $ Stream.fromList "hello"
Right "hello"

For this reason Applicative, Alternative or Monad compositions with recursion cannot be used with the Parser type. Alternative type class based operations like asum and Alternative based generic parser combinators use recursion. Similarly, Applicative type class based operations like sequence use recursion. Custom implementations of many such operations are provided in this module (e.g. some, many), and those should be used instead.
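For example, instead of the Alternative class many, the many from this module can be used. A sketch of the expected usage (the output shown is illustrative):

>>> Stream.parse (Parser.many (Parser.satisfy (== 'a')) Fold.toList) $ Stream.fromList "aaab"
Right "aaa"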

Another limitation of the Parser type is its quadratic complexity, which causes a slowdown when too many nested compositions are used. In particular, the Applicative, Monad and Alternative instances, and sequenced parsing operations (e.g. nested one, and splitWith) degrade performance quadratically (O(n^2)) when combined n times; roughly 8 or fewer sequenced parsers are fine. READ THE DOCS OF THE APPLICATIVE, MONAD AND ALTERNATIVE INSTANCES.

Streaming Parsers

With ParserK you can use the generic Alternative type class based parsers from the parser-combinators library or similar. However, we recommend that you use the equivalent functionality from this module for better performance and for streaming behavior.

Firstly, the combinators in this module are faster due to stream fusion. Secondly, these are streaming in nature as the results can be passed directly to other stream consumers (folds or parsers). The Alternative type class based parsers would end up buffering all the results in lists before they can be consumed.
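As a sketch of the streaming behaviour, the results below are consumed incrementally by a fold instead of being buffered in a list (the output shown is illustrative):

>>> Stream.parse (Parser.many (Parser.listEq "ab") Fold.length) $ Stream.fromList "ababab!"
Right 3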

When recursion or heavy nesting is needed use ParserK.

Error Reporting

These parsers do not report the error context (e.g. line number or column). This may be supported in future.

Monad Transformer Stack

A MonadTrans instance is not provided. If the Parser type is the topmost layer (which should almost always be the case) you can just use fromEffect to execute the effects of the lower layer monad.
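For example, a sketch where the underlying monad is IO and an effect is lifted with fromEffect (the output shown is illustrative):

>>> p = Parser.fromEffect (putStrLn "parsing...") *> Parser.fromFold Fold.toList
>>> Stream.parse p $ Stream.fromList "hello"
parsing...
Right "hello"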

Parser vs ParserK Implementation

The Parser type represents a stream consumer by composing state as data which enables stream fusion. Stream fusion generates a tight loop without any constructor allocations between the stages, providing C like performance for the loop. Stream fusion works when multiple functions are combined in a pipeline statically. Therefore, the operations in this module must be inlined and must not be used recursively to allow for stream fusion.

The ParserK type represents a stream consumer by composing function calls, therefore, a function call overhead is incurred at each composition. It is quite fast in general but may be a few times slower than a fused parser. However, it allows for scalable dynamic composition, especially when parsers need to be used recursively. With the ParserK type, operations like splitWith provide linear (O(n)) performance with respect to the number of compositions.

Experimental APIs

Please refer to Streamly.Internal.Data.Parser for functions that have not yet been released.

Synopsis

Setup

To execute the code examples provided in this module in ghci, please run the following commands first.

>>> :m
>>> import Control.Applicative ((<|>))
>>> import Data.Bifunctor (second)
>>> import Data.Char (isSpace)
>>> import qualified Data.Foldable as Foldable
>>> import qualified Data.Maybe as Maybe
>>> import Streamly.Data.Fold (Fold)
>>> import Streamly.Data.Parser (Parser)
>>> import qualified Streamly.Data.Fold as Fold
>>> import qualified Streamly.Data.Parser as Parser
>>> import qualified Streamly.Data.Stream as Stream

For APIs that have not been released yet.

>>> import qualified Streamly.Internal.Data.Fold as Fold
>>> import qualified Streamly.Internal.Data.Parser as Parser

Parser Type

data Parser a m b Source #

A parser is a fold that can fail and is represented as Parser step initial extract. Before we drive a parser we call the initial action to retrieve the initial state of the fold. The parser driver invokes step with the state returned by the previous step and the next input element. It results in a new state and a command to the driver represented by the Step type. The driver keeps invoking the step function until it stops or fails. At any point in time the driver can call extract to inspect the result of the fold. If the parser hits the end of input, extract is called. It may result in an error or an output value.

Pre-release

Instances

Instances details
Monad m => MonadFail (Parser a m) Source #
>>> fail = Parser.die
Instance details

Defined in Streamly.Internal.Data.Parser.Type

Methods

fail :: String -> Parser a m a0 Source #

MonadIO m => MonadIO (Parser a m) Source #
>>> liftIO = Parser.fromEffect . liftIO
Instance details

Defined in Streamly.Internal.Data.Parser.Type

Methods

liftIO :: IO a0 -> Parser a m a0 Source #

Monad m => Alternative (Parser a m) Source #

READ THE CAVEATS in alt before using this instance.

>>> empty = Parser.die "empty"
>>> (<|>) = Parser.alt
>>> many = flip Parser.many Fold.toList
>>> some = flip Parser.some Fold.toList
Instance details

Defined in Streamly.Internal.Data.Parser.Type

Methods

empty :: Parser a m a0 Source #

(<|>) :: Parser a m a0 -> Parser a m a0 -> Parser a m a0 Source #

some :: Parser a m a0 -> Parser a m [a0] Source #

many :: Parser a m a0 -> Parser a m [a0] Source #

Monad m => Applicative (Parser a m) Source #

READ THE CAVEATS in splitWith before using this instance.

>>> pure = Parser.fromPure
>>> (<*>) = Parser.splitWith id
>>> (*>) = Parser.split_
Instance details

Defined in Streamly.Internal.Data.Parser.Type

Methods

pure :: a0 -> Parser a m a0 Source #

(<*>) :: Parser a m (a0 -> b) -> Parser a m a0 -> Parser a m b Source #

liftA2 :: (a0 -> b -> c) -> Parser a m a0 -> Parser a m b -> Parser a m c Source #

(*>) :: Parser a m a0 -> Parser a m b -> Parser a m b Source #

(<*) :: Parser a m a0 -> Parser a m b -> Parser a m a0 Source #

Functor m => Functor (Parser a m) Source #

Map a function on the result i.e. on b in Parser a m b.

Instance details

Defined in Streamly.Internal.Data.Parser.Type

Methods

fmap :: (a0 -> b) -> Parser a m a0 -> Parser a m b Source #

(<$) :: a0 -> Parser a m b -> Parser a m a0 Source #

Monad m => Monad (Parser a m) Source #

READ THE CAVEATS in concatMap before using this instance.

>>> (>>=) = flip Parser.concatMap
Instance details

Defined in Streamly.Internal.Data.Parser.Type

Methods

(>>=) :: Parser a m a0 -> (a0 -> Parser a m b) -> Parser a m b Source #

(>>) :: Parser a m a0 -> Parser a m b -> Parser a m b Source #

return :: a0 -> Parser a m a0 Source #

Parsers

From Folds

fromFold :: Monad m => Fold m a b -> Parser a m b Source #

Make a Parser from a Fold. This parser sends all of its input to the fold.
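A sketch of expected usage (the output shown is illustrative):

>>> Stream.parse (Parser.fromFold Fold.sum) $ Stream.fromList [1..5]
Right 15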

Without Input

fromPure :: Monad m => b -> Parser a m b Source #

A parser that always yields a pure value without consuming any input.
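A sketch of expected usage (the output shown is illustrative):

>>> Stream.parse (Parser.fromPure 42) $ Stream.fromList "ignored input"
Right 42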

fromEffect :: Monad m => m b -> Parser a m b Source #

A parser that always yields the result of an effectful action without consuming any input.
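A sketch of expected usage (the output shown is illustrative):

>>> Stream.parse (Parser.fromEffect (pure 42)) $ Stream.fromList "ignored input"
Right 42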

die :: Monad m => String -> Parser a m b Source #

A parser that always fails with an error message without consuming any input.
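A sketch of expected usage (the exact error rendering shown is illustrative):

>>> Stream.parse (Parser.die "oops") $ Stream.fromList "anything"
Left (ParseError "oops")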

peek :: Monad m => Parser a m a Source #

Peek the head element of a stream, without consuming it. Fails if it encounters end of input.

>>> Stream.parse ((,) <$> Parser.peek <*> Parser.satisfy (> 0)) $ Stream.fromList [1]
Right (1,1)
peek = lookAhead (satisfy (const True))

eof :: Monad m => Parser a m () Source #

Succeeds if we are at the end of input, fails otherwise.

>>> Stream.parse ((,) <$> Parser.satisfy (> 0) <*> Parser.eof) $ Stream.fromList [1]
Right (1,())

Element parsers

one :: Monad m => Parser a m a Source #

Consume one element from the head of the stream. Fails if it encounters end of input.

>>> one = Parser.satisfy $ const True
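A sketch of expected usage (the output shown is illustrative):

>>> Stream.parse Parser.one $ Stream.fromList "hello"
Right 'h'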

oneOf :: (Monad m, Eq a, Foldable f) => f a -> Parser a m a Source #

Match any one of the elements in the supplied list.

>>> oneOf xs = Parser.satisfy (`Foldable.elem` xs)
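A sketch of expected usage (the output shown is illustrative):

>>> Stream.parse (Parser.oneOf "aeiou") $ Stream.fromList "ant"
Right 'a'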

When performance matters a pattern matching predicate could be more efficient than a Foldable datatype:

let p x =
        case x of
            'a' -> True
            'e' -> True
            _   -> False
 in satisfy p

For a pattern-match based predicate GHC may use a binary search, whereas searching in a list is linear. Alternatively, you can also use an array instead of a list for storage and search.

noneOf :: (Monad m, Eq a, Foldable f) => f a -> Parser a m a Source #

See performance notes in oneOf.

>>> noneOf xs = Parser.satisfy (`Foldable.notElem` xs)

satisfy :: Monad m => (a -> Bool) -> Parser a m a Source #

Returns the next element if it passes the predicate, fails otherwise.

>>> Stream.parse (Parser.satisfy (== 1)) $ Stream.fromList [1,0,1]
Right 1
>>> toMaybe f x = if f x then Just x else Nothing
>>> satisfy f = Parser.maybe (toMaybe f)

Sequences

streamEqBy :: Monad m => (a -> a -> Bool) -> Stream m a -> Parser a m () Source #

Like listEqBy but uses a stream instead of a list and does not return the stream.
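A sketch of expected usage, matching a prefix of the input and yielding () (the output shown is illustrative):

>>> Stream.parse (Parser.streamEqBy (==) (Stream.fromList "hello")) $ Stream.fromList "hello world"
Right ()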

listEqBy :: Monad m => (a -> a -> Bool) -> [a] -> Parser a m [a] Source #

Match the given sequence of elements using the given comparison function. Returns the original sequence if successful.

Definition:

>>> listEqBy cmp xs = Parser.streamEqBy cmp (Stream.fromList xs) *> Parser.fromPure xs

Examples:

>>> Stream.parse (Parser.listEqBy (==) "string") $ Stream.fromList "string"
Right "string"
>>> Stream.parse (Parser.listEqBy (==) "mismatch") $ Stream.fromList "match"
Left (ParseError "streamEqBy: mismtach occurred")

listEq :: (Monad m, Eq a) => [a] -> Parser a m [a] Source #

Match the input sequence with the supplied list and return it if successful.

>>> listEq = Parser.listEqBy (==)

Combinators

Mapping on input

lmap :: (a -> b) -> Parser b m r -> Parser a m r Source #

lmap f parser maps the function f on the input of the parser.

>>> Stream.parse (Parser.lmap (\x -> x * x) (Parser.fromFold Fold.sum)) (Stream.enumerateFromTo 1 100)
Right 338350
lmap f = Parser.lmapM (return . f)

lmapM :: Monad m => (a -> m b) -> Parser b m r -> Parser a m r Source #

lmapM f parser maps the monadic function f on the input of the parser.
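A sketch of expected usage, squaring each input effectfully before summing (the output shown is illustrative):

>>> Stream.parse (Parser.lmapM (\x -> pure (x * x)) (Parser.fromFold Fold.sum)) (Stream.enumerateFromTo 1 10)
Right 385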

Map on output

rmapM :: Monad m => (b -> m c) -> Parser a m b -> Parser a m c Source #

rmapM f parser maps the monadic function f on the output of the parser.

>>> rmap = fmap
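A sketch of expected usage (the output shown is illustrative):

>>> Stream.parse (Parser.rmapM (pure . (+ 1)) (Parser.fromFold Fold.sum)) $ Stream.fromList [1..10]
Right 56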

Filtering

filter :: Monad m => (a -> Bool) -> Parser a m b -> Parser a m b Source #

Include only those elements that pass a predicate.

>>> Stream.parse (Parser.filter (> 5) (Parser.fromFold Fold.sum)) $ Stream.fromList [1..10]
Right 40

Look Ahead

lookAhead :: Monad m => Parser a m b -> Parser a m b Source #

Run a parser without consuming the input.
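A sketch of expected usage: the looked-ahead input is available again to the next parser (the output shown is illustrative):

>>> Stream.parse ((,) <$> Parser.lookAhead (Parser.takeEQ 2 Fold.toList) <*> Parser.takeEQ 2 Fold.toList) $ Stream.fromList "abc"
Right ("ab","ab")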

Tokenize by length

takeEQ :: Monad m => Int -> Fold m a b -> Parser a m b Source #

Stops after taking exactly n input elements.

  • Stops - after consuming n elements.
  • Fails - if the stream or the collecting fold ends before it can collect exactly n elements.
>>> Stream.parse (Parser.takeEQ 2 Fold.toList) $ Stream.fromList [1,0,1]
Right [1,0]
>>> Stream.parse (Parser.takeEQ 4 Fold.toList) $ Stream.fromList [1,0,1]
Left (ParseError "takeEQ: Expecting exactly 4 elements, input terminated on 3")

Tokenize by predicate

takeWhile :: Monad m => (a -> Bool) -> Fold m a b -> Parser a m b Source #

Collect stream elements until an element fails the predicate. The element on which the predicate fails is returned back to the input stream.

  • Stops - when the predicate fails or the collecting fold stops.
  • Fails - never.
>>> Stream.parse (Parser.takeWhile (== 0) Fold.toList) $ Stream.fromList [0,0,1,0,1]
Right [0,0]
>>> takeWhile cond f = Parser.takeWhileP cond (Parser.fromFold f)

We can implement a breakOn using takeWhile:

breakOn p = takeWhile (not . p)

takeWhile1 :: Monad m => (a -> Bool) -> Fold m a b -> Parser a m b Source #

Like takeWhile but takes at least one element otherwise fails.

>>> takeWhile1 cond p = Parser.takeWhileP cond (Parser.takeBetween 1 maxBound p)
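A sketch of expected usage (the output shown is illustrative):

>>> Stream.parse (Parser.takeWhile1 (== 0) Fold.toList) $ Stream.fromList [0,0,1]
Right [0,0]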

dropWhile :: Monad m => (a -> Bool) -> Parser a m () Source #

Drain the input as long as the predicate succeeds, running the effects and discarding the results.

This is also called skipWhile in some parsing libraries.

>>> dropWhile p = Parser.takeWhile p Fold.drain
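A sketch of expected usage, skipping leading spaces before collecting the rest (the output shown is illustrative):

>>> Stream.parse (Parser.dropWhile (== ' ') *> Parser.fromFold Fold.toList) $ Stream.fromList "   hello"
Right "hello"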

wordBy :: Monad m => (a -> Bool) -> Fold m a b -> Parser a m b Source #

Like splitOn but strips leading, trailing, and repeated separators. Therefore, ".a..b." with . as the separator would be parsed as ["a","b"]. In other words, it's like parsing words from whitespace separated text; see the sketch after the definitions below.

  • Stops - when it finds a word separator after a non-word element
  • Fails - never.
>>> wordBy = Parser.wordFramedBy (const False) (const False) (const False)
Stream.wordsBy pred f = Stream.parseMany (Parser.wordBy pred f)
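A sketch of the expected behaviour on the ".a..b." example above (the output shown is illustrative):

>>> Stream.fold Fold.toList $ Stream.parseMany (Parser.wordBy (== '.') Fold.toList) $ Stream.fromList ".a..b."
[Right "a",Right "b"]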

Grouping

groupBy :: Monad m => (a -> a -> Bool) -> Fold m a b -> Parser a m b Source #

Given an input stream [a,b,c,...] and a comparison function cmp, the parser assigns the element a to the first group, then if a `cmp` b is True b is also assigned to the same group. If a `cmp` c is True then c is also assigned to the same group and so on. When the comparison fails the parser is terminated. Each group is folded using the Fold f and the result of the fold is the result of the parser.

  • Stops - when the comparison fails.
  • Fails - never.
>>> :{
 runGroupsBy eq =
     Stream.fold Fold.toList
         . Stream.parseMany (Parser.groupBy eq Fold.toList)
         . Stream.fromList
:}
>>> runGroupsBy (<) []
[]
>>> runGroupsBy (<) [1]
[Right [1]]
>>> runGroupsBy (<) [3, 5, 4, 1, 2, 0]
[Right [3,5,4],Right [1,2],Right [0]]

groupByRolling :: Monad m => (a -> a -> Bool) -> Fold m a b -> Parser a m b Source #

Unlike groupBy this combinator performs a rolling comparison of two successive elements in the input stream. Assuming the input stream is [a,b,c,...] and the comparison function is cmp, the parser first assigns the element a to the first group, then if a `cmp` b is True b is also assigned to the same group. If b `cmp` c is True then c is also assigned to the same group and so on. When the comparison fails the parser is terminated. Each group is folded using the Fold f and the result of the fold is the result of the parser.

  • Stops - when the comparison fails.
  • Fails - never.
>>> :{
 runGroupsByRolling eq =
     Stream.fold Fold.toList
         . Stream.parseMany (Parser.groupByRolling eq Fold.toList)
         . Stream.fromList
:}
>>> runGroupsByRolling (<) []
[]
>>> runGroupsByRolling (<) [1]
[Right [1]]
>>> runGroupsByRolling (<) [3, 5, 4, 1, 2, 0]
[Right [3,5],Right [4],Right [1,2],Right [0]]

Pre-release

groupByRollingEither :: Monad m => (a -> a -> Bool) -> Fold m a b -> Fold m a c -> Parser a m (Either b c) Source #

Like groupByRolling, but if the predicate is True then it collects using the first fold as long as the predicate holds True; if the predicate is False it collects using the second fold as long as it remains False. Returns Left for the first case and Right for the second case.

For example, if we want to detect sorted sequences in a stream, both ascending and descending cases we can use 'groupByRollingEither (<=) Fold.toList Fold.toList'.

Pre-release

Framing

wordWithQuotes Source #

Arguments

:: (Monad m, Eq a) 
=> Bool

Retain the quotes and escape chars in the output

-> (a -> a -> Maybe a)

quote char -> escaped char -> translated char

-> a

The escape element.

-> (a -> Maybe a)

If left quote, return right quote, else Nothing.

-> (a -> Bool)

Matches a word separator?

-> Fold m a b 
-> Parser a m b 

Quote and bracket aware word splitting with escaping. Like wordBy but word separators within specified quotes or brackets are ignored. Quotes and escape characters can be processed. If the end quote is different from the start quote it is called a bracket. The following quoting rules apply:

  • In an unquoted string a character may be preceded by an escape character. The escape character is removed and the character following it is treated literally with no special meaning, e.g. "h\ e\ l\ l\ o" is a single word and "\n" is the same as "n".
  • Any part of the word can be placed within quotes. Inside quotes all characters are treated literally with no special meaning. Quoting character itself cannot be used within quotes unless escape processing within quotes is applied to allow it.
  • Optionally, escape processing for the quoted part can be specified. The escape character has no special meaning inside quotes unless it is followed by a character that has an escape translation specified; in that case the escape character is removed and the specified translation is applied to the character following it. This can be used to escape the quoting character itself within quotes.
  • There can be multiple quoting characters, when a quote starts, all other quoting characters within that quote lose any special meaning until the quote is closed.
  • A starting quote char without an ending char generates a parse error. An ending bracket char without a corresponding bracket begin is ignored.
  • Brackets can be nested.

We should note that unquoted and quoted escape processing are different. In the unquoted part the escape character is always removed. In the quoted part it is removed only if it is followed by a character that has special meaning. This is consistent with how the shell performs escape processing.
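A rough sketch, assuming double quotes as the only quote pair, backslash as the unquoted escape character, no escape processing inside quotes, and whitespace as the word separator; the argument choices and the output shown are illustrative, not verified:

>>> rquote x = if x == '"' then Just '"' else Nothing
>>> p = Parser.wordWithQuotes False (\_ _ -> Nothing) '\\' rquote isSpace Fold.toList
>>> Stream.parse p $ Stream.fromList "\"hello world\" rest"
Right "hello world"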

Splitting

many :: Monad m => Parser a m b -> Fold m b c -> Parser a m c Source #

Collect zero or more parses. Apply the supplied parser repeatedly on the input stream and push the parse results to a downstream fold.

Stops: when the downstream fold stops or the parser fails. Fails: never, produces zero or more results.

>>> many = Parser.countBetween 0 maxBound

Compare with many from the Alternative type class.
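A sketch of expected usage (the output shown is illustrative):

>>> Stream.parse (Parser.many (Parser.listEq "ab") Fold.toList) $ Stream.fromList "ababc"
Right ["ab","ab"]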

some :: Monad m => Parser a m b -> Fold m b c -> Parser a m c Source #

Collect one or more parses. Apply the supplied parser repeatedly on the input stream and push the parse results to a downstream fold.

Stops: when the downstream fold stops or the parser fails. Fails: if it stops without producing a single result.

>>> some p f = Parser.manyP p (Parser.takeGE 1 f)
>>> some = Parser.countBetween 1 maxBound

Compare with some from the Alternative type class.

manyTill :: Monad m => Parser a m b -> Parser a m x -> Fold m b c -> Parser a m c Source #

manyTill chunking test f tries the parser test on the input; if test fails it backtracks and tries chunking; after chunking succeeds, test is tried again, and so on. The parser stops when test succeeds. The output of test is discarded and the output of chunking is accumulated by the supplied fold. The parser fails if chunking fails.

Stops when the fold f stops.
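A sketch of expected usage, collecting elements until a dot is seen (the output shown is illustrative):

>>> p = Parser.manyTill (Parser.satisfy (/= '.')) (Parser.satisfy (== '.')) Fold.toList
>>> Stream.parse p $ Stream.fromList "hello.world"
Right "hello"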

De-interleaving

deintercalate :: Monad m => Parser a m x -> Parser a m y -> Fold m (Either x y) z -> Parser a m z Source #

Apply two parsers alternately to an input stream. The input stream is considered an interleaving of two patterns. The two parsers represent the two patterns. Parsing starts at the first parser and stops at the first parser. It can be used to parse an infix style pattern, e.g. p1 p2 p1. Empty input or a single parse of the first parser is accepted.

>>> p1 = Parser.takeWhile1 (not . (== '+')) Fold.toList
>>> p2 = Parser.satisfy (== '+')
>>> p = Parser.deintercalate p1 p2 Fold.toList
>>> Stream.parse p $ Stream.fromList ""
Right []
>>> Stream.parse p $ Stream.fromList "1"
Right [Left "1"]
>>> Stream.parse p $ Stream.fromList "1+"
Right [Left "1"]
>>> Stream.parse p $ Stream.fromList "1+2+3"
Right [Left "1",Right '+',Left "2",Right '+',Left "3"]