
Consuming Hasql statement outputs using something like a parser

I have an application which models a data domain using some deeply nested record structures. A contrived but analogous example would be something like:

Book
  - Genre
  - Author
    - Hometown
      - Country

I've found that when writing queries using Hasql (or, more precisely, Hasql-TH), I end up with an enormous function which takes a huge tuple and builds my record by consuming the tuple tail-first, constructing the nested record types, and finally assembling everything into one big type (including transforming some of the raw values etc.). It ends up looking something like this:

bookDetailStatement :: Statement BookID (Maybe Book)
bookDetailStatement = dimap
  (\ (BookID a) -> a)    -- extract the actual ID from the container
  (fmap mkBook)          -- process the record if it exists
  [maybeStatement|
    select
      (some stuff)
    from books
    join genres on (...)
    join authors on (...)
    join towns on (...)
    join countries on (...)
    where books.id = $1 :: int4
    limit 1
  |]

mkBook (
  -- Book
  book_id, book_title, ...
  -- Genre
  genre_name, ...
  -- Author
  author_id, author_name, ...
  -- Town
  town_name, town_coords, ...
  -- Country
  country_name, ...
) = let {- some data processing -} in Book {..}

This has been a bit annoying to write and to maintain/refactor, and I was thinking about trying to remodel it using Control.Applicative. That got me thinking that this is essentially a kind of parser (a bit like Megaparsec) where we are consuming an input stream and then want to compose parsing functions which take some "tokens" from that stream and return results wrapped in the parsing Functor (which really should be a Monad, I think). The only difference is that, since these results are nested, they also need to consume the outputs of previous parsers (although actually you can do this with Megaparsec too, and with Control.Applicative). This would allow smaller functions mkCountry, mkTown, mkAuthor, etc., which could be composed with <*> and <$>.
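To make the idea concrete, here is a minimal sketch of such a "row parser". All the names here, including the `Col` token type, are hypothetical stand-ins invented for illustration; hasql-th actually hands you a typed tuple rather than a token stream, so this only sketches the parser idea, not real Hasql plumbing:

```haskell
{-# LANGUAGE LambdaCase #-}

-- A hypothetical token type standing in for raw column values.
data Col = I Int | T String deriving (Show, Eq)

-- A parser consumes a prefix of the column stream and returns the rest.
newtype RowParser a = RowParser { runRowParser :: [Col] -> Maybe (a, [Col]) }

instance Functor RowParser where
  fmap f (RowParser p) = RowParser $ \cols -> do
    (a, rest) <- p cols
    pure (f a, rest)

instance Applicative RowParser where
  pure a = RowParser $ \cols -> Just (a, cols)
  RowParser pf <*> RowParser pa = RowParser $ \cols -> do
    (f, rest)  <- pf cols
    (a, rest') <- pa rest
    pure (f a, rest')

-- Primitive "token" parsers, one per column type.
int :: RowParser Int
int = RowParser $ \case
  I n : rest -> Just (n, rest)
  _          -> Nothing

text :: RowParser String
text = RowParser $ \case
  T s : rest -> Just (s, rest)
  _          -> Nothing

-- Hypothetical domain types; the nested constructors compose with <$> and <*>.
data Country = Country { countryName :: String } deriving (Show, Eq)
data Town    = Town    { townName :: String, townCountry :: Country } deriving (Show, Eq)

country :: RowParser Country
country = Country <$> text

town :: RowParser Town
town = Town <$> text <*> country

main :: IO ()
main = print (runRowParser town [T "Dublin", T "Ireland"])
```

Each sub-parser consumes only the columns it needs, so `town` is built from `text <*> country` in exactly the applicative style described above.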

So, my question is basically twofold: (1) is this a reasonable (or even common) approach for real-world applications of this kind, or am I missing some sort of obvious optimisation which would allow this code to be more composable; (2) if I were to implement this, is adapting Megaparsec to the job a good route (basically writing a tokeniser for the query result, I think), or would it be simpler to write a data type that contains the query result and output value and then define the Monad and Applicative instances for it?

If I understand you correctly, your question is about constructing the mkBook mapping function by composing it from smaller pieces.

What does that function do? It maps data from a denormalised form (a tuple of all the produced fields) to your domain-specific structure consisting of other structures. It is a very basic pure function, where you just move data around based on your domain logic. So the problem sounds like a domain problem. As such it is not general but specific to the domain of your application, and hence trying to abstract over it will likely result in neither a reusable abstraction nor a simpler codebase.

If you discover patterns inside such functions, those are likely to be domain-specific as well. I can advise nothing better than to wrap them in other pure functions and to compose by simply calling them. No need for applicatives or monads.
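As a sketch of what that suggestion looks like in practice (all types and fields here are hypothetical stand-ins for the question's domain), the big mkBook function can simply delegate to smaller pure helpers, each responsible for one level of nesting:

```haskell
-- Hypothetical domain types mirroring the question's nesting.
data Country = Country { countryName :: String } deriving (Show, Eq)
data Town    = Town    { townName :: String, townCountry :: Country } deriving (Show, Eq)
data Author  = Author  { authorName :: String, authorHometown :: Town } deriving (Show, Eq)
data Book    = Book    { bookTitle :: String, bookAuthor :: Author } deriving (Show, Eq)

-- Small, domain-specific constructors composed by plain function calls.
mkCountry :: String -> Country
mkCountry = Country

mkTown :: String -> String -> Town
mkTown tName cName = Town tName (mkCountry cName)

mkAuthor :: String -> String -> String -> Author
mkAuthor aName tName cName = Author aName (mkTown tName cName)

-- The top-level mapping is then just a plain function over the row tuple.
mkBook :: (String, String, String, String) -> Book
mkBook (title, aName, tName, cName) = Book title (mkAuthor aName tName cName)

main :: IO ()
main = print (mkBook ("Dune", "Frank Herbert", "Tacoma", "USA"))
```

Each helper stays testable on its own, and refactoring the nesting only touches the helper concerned, without any extra abstraction layer.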

Concerning parsing libraries and tokenisation, I really don't see how they relate to the discussed problem, but I may be missing your point. Also, I don't recommend bringing lenses in to solve such a trivial problem; you'll likely end up with a more complicated and less maintainable solution.
