
declarative loop vs imperative loop

I am trying to switch my programming style from imperative to declarative, but there are some concepts that bug me, like performance when it comes to loops. For example, I have the original DATA below, and after manipulating it I wish to get three expected outcomes: itemsHash, namesHash, and rangeItemsHash.

// original data

const DATA = [
  {id: 1, name: 'Alan', date: '2021-01-01', age: 0},
  {id: 2, name: 'Ben', date: '1980-02-02', age: 41},
  {id: 3, name: 'Clara', date: '1959-03-03', age: 61},
]

...

// expected outcome

// itemsHash => {
//   1: {id: 1, name: 'Alan', date: '2021-01-01', age: 0},
//   2: {id: 2, name: 'Ben', date: '1980-02-02', age: 41},
//   3: {id: 3, name: 'Clara', date: '1959-03-03', age: 61},
// }

// namesHash => {1: 'Alan', 2: 'Ben', 3: 'Clara'}

// rangeItemsHash => {
//   minor: [{id: 1, name: 'Alan', date: '2021-01-01', age: 0}],
//   junior: [{id: 2, name: 'Ben', date: '1980-02-02', age: 41}],
//   senior: [{id: 3, name: 'Clara', date: '1959-03-03', age: 61}],
// }

// imperative way

const itemsHash = {}
const namesHash = {}
const rangeItemsHash = {}

DATA.forEach(person => {
  itemsHash[person.id] = person;
  namesHash[person.id] = person.name;
  if (person.age > 60){
    if (typeof rangeItemsHash['senior'] === 'undefined'){
      rangeItemsHash['senior'] = []
    }
    rangeItemsHash['senior'].push(person)
  }
  else if (person.age > 21){
    if (typeof rangeItemsHash['junior'] === 'undefined'){
      rangeItemsHash['junior'] = []
    }
    rangeItemsHash['junior'].push(person)
  }
  else {
    if (typeof rangeItemsHash['minor'] === 'undefined'){
      rangeItemsHash['minor'] = []
    }
    rangeItemsHash['minor'].push(person)
  }
})

// declarative way

const itemsHash = R.indexBy(R.prop('id'))(DATA);
const namesHash = R.compose(R.map(R.prop('name')), R.indexBy(R.prop('id')))(DATA);

const gt21 = R.gt(R.__, 21);      // age > 21
const lte60 = R.lte(R.__, 60);    // age <= 60
const isMinor = R.lte(R.__, 21);  // age <= 21, matching the imperative else-branch (including age 21 itself)
const isJunior = R.both(gt21, lte60);
const isSenior = R.gt(R.__, 60);  // age > 60

const groups = {minor: isMinor, junior: isJunior, senior: isSenior};

const rangeItemsHash = R.map(method => R.filter(R.compose(method, R.prop('age')))(DATA))(groups)

To achieve the expected outcome, the imperative version loops only once, while the declarative version loops at least 3 times (itemsHash, namesHash, rangeItemsHash). Which one is better? Is there any trade-off in performance?

Similar to how .map(f).map(g) is equivalent to .map(compose(g, f)), you can compose reducers to ensure a single pass gives you all results.
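To make that equivalence concrete, here is a minimal plain-JS illustration (f and g are placeholder functions of my choosing, not from the question):

const f = x => x + 1;
const g = x => x * 2;

[1, 2, 3].map(f).map(g);      // [4, 6, 8] (two passes over the array)
[1, 2, 3].map(x => g(f(x)));  // [4, 6, 8] (one pass, same result)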

Writing declarative code does not really have anything to do with the decision to loop once or multiple times.

// Reducer logic for all 3 values you're interested in

// id: person
const idIndexReducer = (idIndex, p) =>
  ({ ...idIndex, [p.id]: p });

// id: name
const idNameIndexReducer = (idNameIndex, p) =>
  ({ ...idNameIndex, [p.id]: p.name });

// Age
const ageLabel = ({ age }) => age > 60 ? "senior" : age > 40 ? "medior" : "junior";
const ageGroupReducer = (ageGroups, p) => {
  const ageKey = ageLabel(p);
  return { ...ageGroups, [ageKey]: (ageGroups[ageKey] || []).concat(p) };
}

// Combine the reducers
const seed = { idIndex: {}, idNameIndex: {}, ageGroups: {} };

const reducer = ({ idIndex, idNameIndex, ageGroups }, p) => ({
  idIndex: idIndexReducer(idIndex, p),
  idNameIndex: idNameIndexReducer(idNameIndex, p),
  ageGroups: ageGroupReducer(ageGroups, p)
})

const DATA = [
  {id: 1, name: 'Alan', date: '2021-01-01', age: 0},
  {id: 2, name: 'Ben', date: '1980-02-02', age: 41},
  {id: 3, name: 'Clara', date: '1959-03-03', age: 61},
]

// Loop once
console.log(
  JSON.stringify(DATA.reduce(reducer, seed), null, 2)
);
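The manual combination above can also be generalized. Here is a sketch of a generic combinator (the name combineReducers and its shape are my own, not part of the original answer): it takes an object of named reducers and returns one reducer that advances each of them against a matching seed.

const combineReducers = (reducers) => (acc, item) =>
  Object.fromEntries(
    Object.entries(reducers).map(([key, r]) => [key, r(acc[key], item)])
  );

// Equivalent to the hand-written reducer above, still a single pass:
const combined = combineReducers({
  idIndex: idIndexReducer,
  idNameIndex: idNameIndexReducer,
  ageGroups: ageGroupReducer,
});

DATA.reduce(combined, seed);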

Subjective part: is it worth it? I don't think so. I like simple code, and in my own experience, going from 1 to 3 loops when working with limited data sets is usually unnoticeable.

So, if using Ramda, I'd stick to:

const { prop, indexBy, map, groupBy, pipe } = R;

const DATA = [
  {id: 1, name: 'Alan', date: '2021-01-01', age: 0},
  {id: 2, name: 'Ben', date: '1980-02-02', age: 41},
  {id: 3, name: 'Clara', date: '1959-03-03', age: 61},
];

const byId = indexBy(prop("id"), DATA);
const nameById = map(prop("name"), byId);
const ageGroups = groupBy(
  pipe(
    prop("age"),
    age => age > 60 ? "senior" : age > 40 ? "medior" : "junior"
  ),
  DATA
);

console.log(JSON.stringify({ byId, nameById, ageGroups }, null, 2))
 <script src="https://cdn.jsdelivr.net/npm/ramda@0.27.1/dist/ramda.min.js"></script>

I have several responses to this.

First, have you tested to know that performance is a problem? Far too much performance work is done on code that is not even close to being a bottleneck in an application. This often happens at the expense of code simplicity and clarity. So my usual rule is to write the simple and obvious code first, trying not to be stupid about performance, but never worrying overmuch about it. Then, if my application is unacceptably slow, benchmark it to find what parts are causing the largest issues, then optimize those. I've rarely had those places be the equivalent of looping three times rather than one. But of course it could happen.
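If you do want to check, a rough sketch of such a micro-benchmark (the dataset size and shape here are invented for illustration):

// build a large synthetic dataset
const big = Array.from({length: 1e6}, (_, i) =>
  ({id: i, name: 'p' + i, age: i % 90}))

console.time('three passes')
const itemsA = {}, namesA = {}, seniorsA = []
big.forEach(p => { itemsA[p.id] = p })
big.forEach(p => { namesA[p.id] = p.name })
big.forEach(p => { if (p.age > 60) seniorsA.push(p) })
console.timeEnd('three passes')

console.time('one pass')
const itemsB = {}, namesB = {}, seniorsB = []
big.forEach(p => {
  itemsB[p.id] = p
  namesB[p.id] = p.name
  if (p.age > 60) seniorsB.push(p)
})
console.timeEnd('one pass')

Only if the timings differ meaningfully on realistic data is the single-pass rewrite worth its complexity.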

If it does, and you really need to do this in a single loop, it's not terribly difficult to do so on top of a reduce call. We could write something like this:

// helper function
const ageGroup = ({age}) => age > 60 ? 'senior' : age > 21 ? 'junior' : 'minor'

// main function
const convert = (people) =>
  people.reduce (({itemsHash, namesHash, rangeItemsHash}, person, _, __, group = ageGroup (person)) => ({
    itemsHash: {...itemsHash, [person.id]: person},
    namesHash: {...namesHash, [person.id]: person.name},
    rangeItemsHash: {...rangeItemsHash, [group]: [...(rangeItemsHash [group] || []), person]}
  }), {itemsHash: {}, namesHash: {}, rangeItemsHash: {}})

// sample data
const data = [
  {id: 1, name: 'Alan', date: '2021-01-01', age: 0},
  {id: 2, name: 'Ben', date: '1980-02-02', age: 41},
  {id: 3, name: 'Clara', date: '1959-03-03', age: 61}
]

// demo
console.log (JSON.stringify (convert (data), null, 4))

(You can remove the JSON.stringify call to demonstrate that the references are shared between the various output hashes.)
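A quick way to see that sharing (hypothetical usage, reusing the convert and data defined above):

const result = convert (data)
console.log (result.itemsHash[1] === data[0])                          // true
console.log (result.itemsHash[2] === result.rangeItemsHash.junior[0]) // true: same Ben object in both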

There are two directions I might go from here to clean up this code.

The first would be to use Ramda. It has some functions that would help simplify a few things here. Using R.reduce, we could eliminate the annoying placeholder parameters that I used to let me add the default parameter group to the reduce signature, and maintain an expressions-over-statements coding style. (We could alternatively do something with R.call.) And using evolve together with functions like assoc and over, we can make this more declarative, like this:

// helper function
const ageGroup = ({age}) => age > 60 ? 'senior' : age > 21 ? 'junior' : 'minor'

// main function
const convert = (people) =>
  reduce (
    (acc, person, group = ageGroup (person)) => evolve ({
      itemsHash: assoc (person.id, person),
      namesHash: assoc (person.id, person.name),
      rangeItemsHash: over (lensProp (group), append (person))
    }) (acc),
    {itemsHash: {}, namesHash: {}, rangeItemsHash: {minor: [], junior: [], senior: []}},
    people
  )

// sample data
const data = [
  {id: 1, name: 'Alan', date: '2021-01-01', age: 0},
  {id: 2, name: 'Ben', date: '1980-02-02', age: 41},
  {id: 3, name: 'Clara', date: '1959-03-03', age: 61}
]

// demo
console.log (JSON.stringify (convert (data), null, 4))
 <script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.27.1/ramda.js"></script> <script> const {reduce, evolve, assoc, over, lensProp, append} = R </script>

A slight downside to this version over the previous one is the need to predefine the categories senior, junior, and minor in the accumulator. We could certainly write an alternative to lensProp that somehow deals with default values, but that would take us further afield.
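For the curious, one possible sketch of such a helper (the name lensPropOr is my own invention; lens, propOr, and assoc are standard Ramda functions):

// A lens that reads a default when the property is missing,
// so the accumulator no longer needs predefined categories.
const lensPropOr = (def, k) => lens (propOr (def, k), assoc (k))

// With this, the seed could shrink back to empty objects:
//   rangeItemsHash: over (lensPropOr ([], group), append (person))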

The other direction I might go is to note that there is still one potentially serious performance problem in the code, one Rich Snapp called the reduce ({...spread}) anti-pattern. To solve that, we might want to mutate our accumulator object in the reduce callback. Ramda -- by its very philosophic nature -- will not help you with this. But we can define some helper functions that will clean our code up at the same time we address this issue, with something like this:

// utility functions
const push = (x, xs) => ((xs.push (x)), xs)   // push x and return the array
const put = (k, v, o) => ((o[k] = v), o)      // set the key and return the object
const appendTo = (k, v, o) => put (k, push (v, o[k] || []), o)

// helper function
const ageGroup = ({age}) => age > 60 ? 'senior' : age > 21 ? 'junior' : 'minor'

// main function
const convert = (people) =>
  people.reduce (({itemsHash, namesHash, rangeItemsHash}, person, _, __, group = ageGroup (person)) => ({
    itemsHash: put (person.id, person, itemsHash),
    namesHash: put (person.id, person.name, namesHash),
    rangeItemsHash: appendTo (group, person, rangeItemsHash)
  }), {itemsHash: {}, namesHash: {}, rangeItemsHash: {}})

// sample data
const data = [
  {id: 1, name: 'Alan', date: '2021-01-01', age: 0},
  {id: 2, name: 'Ben', date: '1980-02-02', age: 41},
  {id: 3, name: 'Clara', date: '1959-03-03', age: 61}
]

// demo
console.log (JSON.stringify (convert (data), null, 4))

But in the end, as already suggested, I would not do this unless performance was provably a problem. I think it's much nicer with Ramda code like this:

const ageGroup = ({age}) => age > 60 ? 'senior' : age > 21 ? 'junior' : 'minor'

const convert = applySpec ({
  itemsHash: indexBy (prop ('id')),
  namesHash: compose (fromPairs, map (props (['id', 'name']))),
  rangeItemsHash: groupBy (ageGroup)
})

const data = [
  {id: 1, name: 'Alan', date: '2021-01-01', age: 0},
  {id: 2, name: 'Ben', date: '1980-02-02', age: 41},
  {id: 3, name: 'Clara', date: '1959-03-03', age: 61}
]

console.log (JSON.stringify (convert (data), null, 4))
 <script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.27.1/ramda.js"></script> <script> const {applySpec, indexBy, prop, compose, fromPairs, map, props, groupBy} = R </script>

Here we might want -- for consistency's sake -- to make ageGroup point-free and/or inline it in the main function. That's not hard, and another answer gave an example of that. I personally find it more readable like this. (There's also probably a cleaner version of namesHash , but I'm out of time.)
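For illustration, one way that point-free version might look (my sketch, using cond, lt, always, T, and pipe from Ramda, which are not in the destructuring above; the age boundaries match the helper):

const ageGroup = pipe (
  prop ('age'),
  cond ([
    [lt (60), always ('senior')],  // lt (60) (age) is 60 < age
    [lt (21), always ('junior')],
    [T, always ('minor')]
  ])
)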

The applySpec version loops three times, exactly what you are worried about. There are times when that might be a problem. But I wouldn't spend much effort on that unless it's a demonstrable problem. Clean code is a useful goal on its own.
