On my machine there are 4 logical processors, so there are four contexts P1, P2, P3 & P4 working with OS threads M1, M2, M3 & M4:
$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
In the code below:
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func getPage(url string) (int, error) {
	resp, err := http.Get(url)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return 0, err
	}
	return len(body), nil
}

func worker(urlChan chan string, sizeChan chan<- string, i int) {
	for {
		url := <-urlChan
		length, err := getPage(url)
		if err == nil {
			sizeChan <- fmt.Sprintf("%s has length %d (%d)", url, length, i)
		} else {
			sizeChan <- fmt.Sprintf("%s has error %s (%d)", url, err, i)
		}
	}
}

func main() {
	urls := []string{"http://www.google.com/", "http://www.yahoo.com",
		"http://www.bing.com", "http://bbc.co.uk", "http://www.ndtv.com", "https://www.cnn.com/"}
	urlChan := make(chan string)
	sizeChan := make(chan string)
	for i := 0; i < len(urls); i++ {
		go worker(urlChan, sizeChan, i)
	}
	for _, url := range urls {
		urlChan <- url
	}
	for i := 0; i < len(urls); i++ {
		fmt.Printf("%s\n", <-sizeChan)
	}
}
there are six goroutines that perform http.Get().

1) Does OS thread M1 get blocked together with goroutine G1 on I/O (http.Get()) on context P1?

or

Does the Go scheduler preempt goroutine G1 from OS thread M1 upon http.Get() and assign G2 to M1? If yes, once G1 is preempted, how does the Go runtime manage G1 so that it resumes when the I/O (http.Get) completes?
2) What is the API to retrieve the context number (P) used by each goroutine (G)? For debugging purposes.
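As far as I know there is no public API that reports which P a goroutine is running on; the supported tools for inspecting scheduler state are `GODEBUG=schedtrace=1000` (which dumps per-P run-queue information every second) and the runtime/trace package. What can be scraped, as an unsupported debugging hack, is the goroutine ID from the header of `runtime.Stack` output. A rough sketch (the `goid` helper is my own, not a standard API):

```go
package main

import (
	"bytes"
	"fmt"
	"runtime"
	"strconv"
	"sync"
)

// goid extracts the current goroutine's ID from the first line of
// runtime.Stack output, which looks like "goroutine 18 [running]:".
// This is a debugging hack, not a supported API.
func goid() int {
	buf := make([]byte, 64)
	buf = buf[:runtime.Stack(buf, false)]
	fields := bytes.Fields(buf)
	id, _ := strconv.Atoi(string(fields[1]))
	return id
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			fmt.Printf("worker %d runs as goroutine %d\n", i, goid())
		}(i)
	}
	wg.Wait()
}
```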
3) With the C pthreads library we would maintain the critical section of a reader-writer problem like the one above using a counting semaphore. Why do we not run into the need for explicit critical sections when using goroutines and channels?
No, it doesn't block. My rough (and unsourced, I picked it up through osmosis) understanding is that whenever a goroutine wants to perform a "blocking" I/O that has an equivalent non-blocking version, the runtime instead issues the non-blocking version, parks the goroutine, and records it as waiting in a table. A runtime poller then runs a select loop (or poll, or whatever equivalent is available) waiting for such operations to unblock, and when an I/O operation unblocks, the select loop looks in the table to figure out which goroutine was interested in the result, and schedules it to be run. In this way, goroutines waiting for I/O do not occupy an OS thread.
In the case of I/O that can't be done in a non-blocking way, or any other blocking syscall, the goroutine executes the syscall through a runtime function that marks its thread as blocked, and the runtime will create a new OS thread for the remaining goroutines to be scheduled on. This maintains the ability to have GOMAXPROCS running (not blocked) goroutines. This doesn't cause much thread bloat for most programs, since the most common syscalls for dealing with files, sockets, etc. have been made async-friendly. (Thanks to @JimB for reminding me of this, and to the authors of the helpful linked answers.)