I currently have multiple goroutines running the same function that need to wait at some point in their execution for a result from a separate part of the program before continuing. My first thought was to have a channel for each goroutine; once the result arrives, iterate over all the channels, write the result into each, and then close them.
How do I share the result with the goroutines effectively/efficiently? Is the only way to write to each goroutine's own channel while it is blocked receiving, before it moves on with the rest of its execution? That seems a bit excessive.
Thank you
Use a channel close to coordinate multiple goroutines waiting on a single event.
Here's an example. The printer function represents the goroutines waiting on a result. The first argument is a channel that will be closed after the result is set. The second argument is a pointer to the result.
func printer(ready chan struct{}, result *string) {
<-ready
fmt.Println(*result)
}
Use it like this:
ready := make(chan struct{})
var result string
// Start the goroutines.
go printer(ready, &result)
go printer(ready, &result)
// Set the result and close the channel to signal that the value is ready.
result = "Hello World!"
close(ready)
This works because a receive on a closed channel returns immediately with the zero value of the channel's element type. It is also race-free: the write to result happens before close(ready), and the close happens before each receive completes, so every goroutine is guaranteed to see the final value of result.
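Putting the pieces together, here is a minimal runnable sketch of the close-to-broadcast pattern. The helper names (waitAndRead, runPrinters) and the out channel used to collect results are ours, added so the program has something observable beyond interleaved prints:

```go
package main

import "fmt"

// waitAndRead blocks until ready is closed, then reads the shared result.
func waitAndRead(ready chan struct{}, result *string, out chan<- string) {
	<-ready // unblocks in every goroutine once the channel is closed
	out <- *result
}

// runPrinters starts n goroutines that all wait on one channel close,
// then returns what each of them read.
func runPrinters(n int) []string {
	ready := make(chan struct{})
	out := make(chan string, n)
	var result string

	// Start the waiting goroutines.
	for i := 0; i < n; i++ {
		go waitAndRead(ready, &result, out)
	}

	// Set the result, then close the channel to release all waiters at once.
	result = "Hello World!"
	close(ready)

	got := make([]string, 0, n)
	for i := 0; i < n; i++ {
		got = append(got, <-out)
	}
	return got
}

func main() {
	for _, s := range runPrinters(3) {
		fmt.Println(s)
	}
}
```

Note that a single close wakes any number of waiters, so no per-goroutine channel is needed.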
Instead of sending the result to each goroutine over a channel, you can store the result in a shared variable and use a condition variable (sync.Cond) to broadcast that the result is ready; each goroutine then reads it from the shared variable.
Something like this:
var resultVar *ResultType // nil means the result is not ready yet
var c = sync.Cond{L: &sync.Mutex{}} // the Cond needs a Locker before use

func f() {
    // compute result
    c.L.Lock()
    resultVar = result
    c.L.Unlock()
    c.Broadcast()
}

func goroutine() {
    // ... other work ...
    c.L.Lock()
    for resultVar == nil {
        c.Wait() // releases the lock while waiting, reacquires before returning
    }
    r := resultVar // read the result while still holding the lock
    c.L.Unlock()
    // use r
}
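A complete runnable sketch of the same sync.Cond idea follows; the names publish and waitForResult are ours, and string stands in for ResultType:

```go
package main

import (
	"fmt"
	"sync"
)

var (
	resultVar *string                       // nil means the result is not ready yet
	c         = sync.Cond{L: &sync.Mutex{}} // the Cond must be given a Locker before use
)

// publish stores the result under the lock and wakes every waiter.
func publish(s string) {
	c.L.Lock()
	resultVar = &s
	c.L.Unlock()
	c.Broadcast()
}

// waitForResult blocks until resultVar is set, then returns a copy of it.
func waitForResult() string {
	c.L.Lock()
	for resultVar == nil {
		c.Wait() // releases the lock while waiting, reacquires on wakeup
	}
	v := *resultVar
	c.L.Unlock()
	return v
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println(waitForResult())
		}()
	}
	publish("Hello World!")
	wg.Wait()
}
```

The for loop around c.Wait() is required: Broadcast does not guarantee the condition still holds when a waiter reacquires the lock, so each waiter must recheck it.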