Flutter pagination with firestore stream

How do I properly implement pagination with a Firestore stream in Flutter (in this case Flutter web)? My current approach with bloc, which is most likely wrong, is like this.

This is the function called on the bloc when loading the next page. Notice that I increase the lastPage variable of the state by 1 each time the function is called:

Stream<JobPostingState> _loadNextPage() async* {
  yield state.copyWith(isLoading: true);
  try {
    service
        .getAllDataByClassPage(state.lastPage + 1)
        .listen((List<Future<DataJob>> listDataJob) async {
      List<DataJob?> listData = [];
      await Future.forEach(listDataJob, (dynamic element) async {
        DataJob data = await element;
        listData.add(data);
      });
      bool isHasMoreData = state.listJobPostBlock.length != listData.length;
      // Update data on state here
    });
  } on Exception catch (e, s) {
    yield StateFailure(error: e.toString());
  }
}

This is the function called to get the stream data:

Stream<List<Future<DataJob>>> getAllDataByClassPage(
      String className, int page) {
    Stream<QuerySnapshot> stream;
    if (className.isNotEmpty)
      stream = collection
          .orderBy('timestamp', "desc")
          .where('class', "==", className).limit(page*20)
          .onSnapshot;
    else
      stream = collection.onSnapshot;

    return stream.map((QuerySnapshot query) {
      return query.docs.map((e) async {
        return DataJob.fromMap(e.data());
      }).toList();
    });
  }

With this approach it works as intended: the data loaded increases when I load the next page while still listening to the stream. But I don't know if this is the proper approach, since it replaces the stream; could it possibly read the data twice and end up making my read count on Firestore much higher than without pagination? Any advice is really appreciated, thanks.

Your approach is indeed not the best possible, and as you scale it is going to become more costly. What I would do in your shoes is create a global variable that represents your stream so you can manipulate it. I can't see all of your code, so I am going to be as generic as possible so you can apply this to your own code.

First, let's declare the stream controller as a global variable that can hold the value of your stream:

import 'dart:async';

StreamController<List<DocumentSnapshot>> streamController =
    StreamController<List<DocumentSnapshot>>();
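One detail worth adding (my note, not part of the original answer): since the controller is a long-lived global, remember to close it when the bloc or screen that uses it is disposed, so listeners don't leak. A minimal sketch:

Future<void> disposePagination() async {
  // Close the paging controller (for example from the bloc's close() or the
  // widget's dispose()) once pagination is no longer needed.
  await streamController.close();
}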

After that we need to change your getAllDataByClassPage function to the following:

// Keep the last document handed out so the next page can start after it
// (a StreamController's stream cannot be peeked for its last event, so the
// cursor is tracked in its own variable).
DocumentSnapshot? lastDoc;

Future<void> getAllDataByClassPage(String className) async {
  // ...taking out of the code your className logic...
  if (lastDoc == null) {
    // First page: just take the first 20 documents.
    QuerySnapshot snap = await collection
        .orderBy('timestamp', "desc")
        .where('class', "==", className)
        .limit(20)
        .onSnapshot
        .first;
    if (snap.docs.isNotEmpty) lastDoc = snap.docs.last;
    streamController.add(snap.docs);
  } else {
    // Later pages: start after the last document of the previous page.
    // startAfterDocument is the cloud_firestore cursor; the firebase web
    // package exposes an equivalent startAfter cursor.
    QuerySnapshot snap = await collection
        .orderBy('timestamp', "desc")
        .where('class', "==", className)
        .startAfterDocument(lastDoc!)
        .limit(20)
        .onSnapshot
        .first;
    if (snap.docs.isNotEmpty) lastDoc = snap.docs.last;
    streamController.add(snap.docs);
  }
}

After that, all you need to do in order to get the stream is invoke streamController.stream.

NOTE: I did not test this code, but this is the general idea of what you should try to do.
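For completeness, here is a rough sketch (not part of the original answer; names such as startListening and loadNextPage are made up) of how a bloc or widget could consume the controller so the stream is subscribed only once:

import 'dart:async';
// streamController, getAllDataByClassPage and DocumentSnapshot come from the
// snippets above (Firestore package import assumed).

StreamSubscription<List<DocumentSnapshot>>? _subscription;

void startListening(String className) {
  // Subscribe once; every page that getAllDataByClassPage fetches is pushed
  // through this single subscription instead of replacing the stream.
  _subscription ??=
      streamController.stream.listen((List<DocumentSnapshot> docs) {
    // Map docs to DataJob here and append them to the bloc state.
  });
  getAllDataByClassPage(className); // load the first page
}

void loadNextPage(String className) {
  getAllDataByClassPage(className); // fetch the next 20 after the last doc
}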

You can keep track of the last document and whether the list has more data using the startAfterDocument method, something like this:

final data = await db
    .collection(collection)
    .where(field, arrayContains: value)
    .limit(limit)
    .startAfterDocument(lastDoc)
    .get()
    .then((snapshots) => {
          // snapshots.docs can be empty on the last page, so guard before
          // reading the last element in real code.
          'lastDoc': snapshots.docs[snapshots.size - 1],
          'docs': snapshots.docs.map((e) => e.data()).toList(),
          'hasMore': snapshots.docs.length == limit,
        });
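Building on that, a possible way to drive repeated pages (a sketch; fetchPage and its parameters are invented for illustration, not from the original answer):

// Sketch: wrap the query above in a helper and feed each page's 'lastDoc'
// back into the next call. Assumes the cloud_firestore package and the same
// db / collection / field / value / limit variables as the snippet above.
Future<Map<String, dynamic>> fetchPage({DocumentSnapshot? startAfter}) async {
  var query = db
      .collection(collection)
      .where(field, arrayContains: value)
      .limit(limit);
  if (startAfter != null) {
    query = query.startAfterDocument(startAfter);
  }
  final snapshots = await query.get();
  return {
    'lastDoc': snapshots.docs.isEmpty ? null : snapshots.docs.last,
    'docs': snapshots.docs.map((e) => e.data()).toList(),
    'hasMore': snapshots.docs.length == limit,
  };
}

// Usage: keep 'lastDoc' between calls and stop when 'hasMore' is false, e.g.
// final first = await fetchPage();
// final second = await fetchPage(startAfter: first['lastDoc']);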
