Using Boost.Spirit.Lex and stream iterators

I want to use Boost.Spirit.Lex to lex a binary file; for this purpose I wrote the following program (here is an extract):

#include <boost/spirit/include/lex_lexertl.hpp>
#include <boost/spirit/include/support_multi_pass.hpp>
#include <boost/bind.hpp>
#include <boost/ref.hpp>
#include <fstream>
#include <iostream>
#include <iterator>
#include <string>

namespace spirit = boost::spirit;
namespace lex = spirit::lex;

#define X 1
#define Y 2
#define Z 3

template<typename L>
class word_count_tokens : public lex::lexer<L>
{
    public:
        word_count_tokens () {
            this->self.add
                ("[^ \t\n]+", X)
                ("\n", Y)
                (".", Z);
        }
};

class counter
{
    public:
        typedef bool result_type;

        template<typename T>
        bool operator () (const T &t, size_t &c, size_t &w, size_t &l) const {
        switch (t.id ()) {
            case X:
                ++w; c += t.value ().size ();
                break;
            case Y:
                ++l; ++c;
                break;
            case Z:
                ++c;
                break;
        }

            return true;
        }
};

int main (int argc, char **argv)
{
    std::ifstream ifs (argv[1], std::ios::in | std::ios::binary);
    auto first = spirit::make_default_multi_pass (std::istream_iterator<char> (ifs));
    auto last = spirit::make_default_multi_pass (std::istream_iterator<char> ());
    size_t w, c, l;
    word_count_tokens<lex::lexertl::lexer<>> word_count_functor;

    w = c = l = 0;

    bool r = lex::tokenize (first, last, word_count_functor, boost::bind (counter (), _1, boost::ref (c), boost::ref (w), boost::ref (l)));

    ifs.close ();

    if (r) {
        std::cout << l << ", " << w << ", " << c << std::endl;
    }

    return 0;
}

The build returns the following error:

lexer.hpp:390:46: error: non-const lvalue reference to type 'const char *' cannot bind to a value of unrelated type

Now, the error is due to the definition of the concrete lexer, lex::lexer<>; in fact, its first template parameter defaults to const char *. I obtain the same error if I use spirit::istream_iterator or spirit::make_default_multi_pass(.....).
But if I specify the correct template parameters of lex::lexer<>, I obtain a plethora of errors!

Solutions?

Update

I have now posted the full source file; it's the word_count example from the site.

I think the real problem is not shown. You don't show first or last, and I have a feeling you might have temporaries there.

Here's a sample I came up with to verify; perhaps you can see what it is you're doing ---wrong--- differently :)

#include <boost/spirit/include/lex_lexertl.hpp>
#include <boost/spirit/include/qi.hpp>
#include <algorithm>
#include <fstream>
#include <iostream>
#include <iterator>
#include <sstream>
#include <string>
#ifdef MEMORY_MAPPED
#   include <boost/iostreams/device/mapped_file.hpp>
#endif

namespace /*anon*/
{
    namespace qi =boost::spirit::qi;
    namespace lex=boost::spirit::lex;

    template <typename Lexer>
        struct mylexer_t : lex::lexer<Lexer>
    {
        mylexer_t()
        {
            fileheader = "hello";
            space      = "\\s"; // pattern was missing; matched whitespace is dropped via pass_ignore below

            this->self = fileheader
                | space [ lex::_pass = lex::pass_flags::pass_ignore ];
        }

        lex::token_def<lex::omit>
            fileheader, space;
    };

    template <typename Iterator> struct my_grammar_t
        : public qi::grammar<Iterator>
    {
        template <typename TokenDef>
            my_grammar_t(TokenDef const& tok) 
                : my_grammar_t::base_type(header)
        {
            header = tok.fileheader;
            BOOST_SPIRIT_DEBUG_NODE(header);
        }

      private:
        qi::rule<Iterator> header;
    };
}

namespace /* */ {

    std::string safechar(char ch) {
        switch (ch) {
            case '\t': return "\\t";
            case '\0': return "\\0";
            case '\r': return "\\r";
            case '\n': return "\\n";
        }
        return std::string(1, ch);
    }

    template <typename It>
        std::string showtoken(const boost::iterator_range<It>& range)
        {
            std::ostringstream oss;
            oss << '[';
            std::transform(range.begin(), range.end(), std::ostream_iterator<std::string>(oss), safechar);
            oss << ']';
            return oss.str();
        }
}

bool parsefile(const std::string& spec)
{
#ifdef MEMORY_MAPPED
    typedef char const* It;
    boost::iostreams::mapped_file mmap(spec.c_str(), boost::iostreams::mapped_file::readonly);
    char const *first = mmap.const_data();
    char const *last = first + mmap.size();
#else
    typedef char const* It;
    std::ifstream in(spec.c_str());
    in.unsetf(std::ios::skipws);

    std::string v(std::istreambuf_iterator<char>(in.rdbuf()), std::istreambuf_iterator<char>());
    It first = &v[0];
    It last = first+v.size();
#endif

    typedef lex::lexertl::token<It  /*, boost::mpl::vector<char, unsigned int, std::string> */> token_type;
    typedef lex::lexertl::actor_lexer<token_type> lexer_type;

    typedef mylexer_t<lexer_type>::iterator_type iterator_type;
    try
    {
        static mylexer_t<lexer_type> mylexer;
        static my_grammar_t<iterator_type> parser(mylexer);

        auto iter = mylexer.begin(first, last);
        auto end  = mylexer.end();

        bool r = qi::parse(iter, end, parser);

        r = r && (iter == end);

        if (!r)
            std::cerr << spec << ": parsing failed at: \"" << std::string(first, last) << "\"\n";
        return r;
    }
    catch (const qi::expectation_failure<iterator_type>& e)
    {
        std::cerr << "FIXME: expected " << e.what_ << ", got '";
        for (auto it=e.first; it!=e.last; it++)
            std::cerr << showtoken(it->value());
        std::cerr << "'" << std::endl;
        return false;
    }
}

int main()
{
    if (parsefile("input.bin"))
        return 0;
    return 1;
}

For the variant:

typedef boost::spirit::istream_iterator It;
std::ifstream in(spec.c_str());
in.unsetf(std::ios::skipws);

It first(in), last;

Okay, since the question was changed, here's a new answer, addressing some points with the complete code sample.

  1. Firstly, you need to use a custom token type, i.e.:

     word_count_tokens<lex::lexertl::lexer<lex::lexertl::token<boost::spirit::istream_iterator>>> word_count_functor;
     // instead of:
     // word_count_tokens<lex::lexertl::lexer<>> word_count_functor;

    Obviously, it's customary to typedef lex::lexertl::token<boost::spirit::istream_iterator>.

  2. You need to use min_token_id instead of the token IDs 1, 2, 3. Also, make it an enum for ease of maintenance:

     enum token_ids { X = lex::min_token_id + 1, Y, Z, }; 
  3. You can no longer just use .size() on the default token value(), since the iterator range is no longer a RandomAccessRange. Instead, employ boost::distance(), which is specialized for iterator_range:

      ++w; c += boost::distance(t.value()); // t.value ().size (); 

Combining these fixes: Live On Coliru

#include <boost/spirit/include/lex_lexertl.hpp>
#include <boost/spirit/include/support_istream_iterator.hpp>
#include <boost/bind.hpp>
#include <boost/ref.hpp>
#include <fstream>
#include <iostream>

namespace spirit = boost::spirit;
namespace lex    = spirit::lex;

enum token_ids {
    X = lex::min_token_id + 1,
    Y,
    Z,
};

template<typename L>
class word_count_tokens : public lex::lexer<L>
{
    public:
        word_count_tokens () {
            this->self.add
                ("[^ \t\n]+", X)
                ("\n"       , Y)
                ("."        , Z);
        }
};

struct counter
{
    typedef bool result_type;

    template<typename T>
    bool operator () (const T &t, size_t &c, size_t &w, size_t &l) const {
        switch (t.id ()) {
            case X:
                ++w; c += boost::distance(t.value()); // t.value ().size ();
                break;
            case Y:
                ++l; ++c;
                break;
            case Z:
                ++c;
                break;
        }

        return true;
    }
};

int main (int argc, char **argv)
{
    std::ifstream ifs (argv[1], std::ios::in | std::ios::binary);
    ifs >> std::noskipws;
    boost::spirit::istream_iterator first(ifs), last;
    word_count_tokens<lex::lexertl::lexer<lex::lexertl::token<boost::spirit::istream_iterator>>> word_count_functor;

    size_t w = 0, c = 0, l = 0;
    bool r = lex::tokenize (first, last, word_count_functor, 
            boost::bind (counter (), _1, boost::ref (c), boost::ref (w), boost::ref (l)));

    ifs.close ();

    if (r) {
        std::cout << l << ", " << w << ", " << c << std::endl;
    }
}

When run on itself, it prints

65, 183, 1665
