allMap stores a list of tasks: the key marks the task type and the value holds that task's parameters. I now need to process these tasks concurrently. I used the two methods below during development, but neither worked well, and I feel I haven't really grasped Go's approach to concurrent processing. Here are my experiences and doubts; I hope the experts can point me in the right direction.
# Method one

// Task describes one unit of work taken from allMap
type Task struct {
    Params     interface{}
    ResultChan chan []byte
    // Wg *sync.WaitGroup
}

Params holds the task's parameters; ResultChan is the channel the processing result is written back to.
// iterate over allMap and start a goroutine for each entry
for key, value := range allMap {
    go func(k string, v interface{}) {
        log.Debug("k : ", k)
        if k == tools.REQUEST_BAOJIE {
            // A
            log.Debug("baojie elem len : ", len(v))
            one_task = &service.Task{
                Params:     v,
                ResultChan: make(chan []byte, len(v)),
                //Wg : new(sync.WaitGroup) ,
            }
            // B
            log.Debugf("1 one_task : %+v ", one_task)
            // AddTask processes one_task and writes the result into one_task.ResultChan
            service.AddTask(one_task)
        } else if k == tools.REQUEST {
        }
    }(key, value)
}

// C
log.Debugf("2 one_task : %+v ", one_task)

// goroutine that receives the results
go func() {
    for item := range one_task.ResultChan {
        log.Debug("Receive data From ResultChan : ", string(item))
    }
    log.Debug("Process ", tools.REQUEST_BAOJIE, " end ")
}()
The drawback of this method is that it depends too heavily on the order in which the program executes. During testing I found that when C happens before A and B, the goroutine that receives the results crashes when it accesses the ResultChan member, because at that point no memory has been allocated for ResultChan yet.

Solution 1: add another parameter of type chan<- interface{} to service.AddTask(one_task); after AddTask finishes processing it writes the result into this channel, and the receiving goroutine listens on that channel and reads the result from it.
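To make solution 1 concrete, here is a minimal, self-contained sketch of what it could look like. The AddTask body and the surrounding main are my assumptions, since the real service package is not shown; the point is only that the result channel is created by the caller before any goroutine starts, so the receiver can never see an uninitialized channel.

```
package main

import "fmt"

type Task struct {
    Params interface{}
}

// AddTask stands in for service.AddTask with the extra channel parameter:
// it writes its result into the channel supplied by the caller.
func AddTask(t *Task, results chan<- []byte) {
    // ... the real processing of t.Params would happen here ...
    results <- []byte(fmt.Sprintf("processed %v", t.Params))
}

func main() {
    params := []string{"a", "b", "c"}
    results := make(chan []byte, len(params)) // created before any goroutine runs

    for _, p := range params {
        go AddTask(&Task{Params: p}, results)
    }

    // Read exactly as many results as tasks were started.
    for i := 0; i < len(params); i++ {
        fmt.Println(string(<-results))
    }
}
```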
Solution 2: delay the point at which concurrency starts.
for k, v := range allMap {
    //go func(k string, v interface{}) {
    log.Debug("k : ", k)
    if k == tools.REQUEST {
        // A
        log.Debug("baojie elem len : ", len(v))
        one_task = &service.Task{
            Params:     v,
            ResultChan: make(chan []byte, len(v)),
            //Wg : new(sync.WaitGroup) ,
        }
        // B
        log.Debugf("1 one_task : %+v ", one_task)
        // AddTask now runs concurrently, but only after ResultChan has been created
        go service.AddTask(one_task)
    } else if k == tools.REQUEST_TCP {
    }
    //}(key, value)
}

// C
log.Debugf("2 one_task : %+v ", one_task)

// goroutine that receives the results
go func() {
    for item := range one_task.ResultChan {
        log.Debug("Receive data From ResultChan : ", string(item))
    }
    log.Debug("Process ", tools.REQUEST_BAOJIE, " end ")
}()
This guarantees that C happens after A and B: ResultChan is created first, and the receiving goroutine only starts reading after AddTask has begun writing data into it.
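One thing both versions share (it is not the question being asked, but it affects them equally) is that `for item := range one_task.ResultChan` only terminates when the channel is closed; otherwise the receiving goroutine blocks forever once the results have been drained. A minimal sketch, using stand-ins for the AddTask workers, of closing the channel with a sync.WaitGroup once every writer is done:

```
package main

import (
    "fmt"
    "sync"
)

func main() {
    resultChan := make(chan []byte, 8)
    var wg sync.WaitGroup

    // Stand-ins for the AddTask calls: each writes one result.
    for i := 0; i < 3; i++ {
        wg.Add(1)
        go func(n int) {
            defer wg.Done()
            resultChan <- []byte(fmt.Sprintf("result %d", n))
        }(i)
    }

    // Close the channel only after every writer has finished,
    // so the range loop below can exit instead of blocking forever.
    go func() {
        wg.Wait()
        close(resultChan)
    }()

    for item := range resultChan {
        fmt.Println("Receive data From ResultChan : ", string(item))
    }
    fmt.Println("Process end")
}
```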
## Question 1

Since the first approach has the problem described above, does the second approach (delaying concurrency) have any drawback in terms of efficiency?
Is there anything wrong with my concurrency logic?

## Question 2

Is the following idea workable?
var task Task

// iterate over allMap and start one goroutine per entry
for key, value := range allMap {
    task := Task{
        params: value,
        result: make(chan interface{}, len(value)), // value is a list
    }
    go processOneByOne(key, value) // len(allMap) goroutines in total
}

// read the results
for result := range task.result {
    // get result from the channel
    // to do ...
}
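For reference, the idea behind question 2 can be made to work if all goroutines write into one channel that the final loop actually ranges over, and that channel is closed once every worker has finished. A hedged sketch along those lines (allMap's concrete type and the worker body are my assumptions):

```
package main

import (
    "fmt"
    "sync"
)

func main() {
    allMap := map[string][]string{
        "typeA": {"a1", "a2"},
        "typeB": {"b1"},
    }

    results := make(chan interface{}, 8) // one shared result channel
    var wg sync.WaitGroup

    for key, value := range allMap {
        wg.Add(1)
        go func(k string, v []string) {
            defer wg.Done()
            // stand-in for processOneByOne(k, v)
            results <- fmt.Sprintf("%s handled %d params", k, len(v))
        }(key, value)
    }

    // Close the shared channel once all workers are done.
    go func() {
        wg.Wait()
        close(results)
    }()

    for r := range results {
        fmt.Println(r)
    }
}
```

Passing key and value as arguments to the goroutine avoids the loop-variable capture pitfall in Go versions before 1.22.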
## Question 3

The idea: define a global chan; processOneByOne writes its result into this chan after it finishes processing, and another goroutine reads the results out of the chan.
The code is roughly as follows:
demo.go:

func TodoWork() {
    go func() {
        for key, value := range allMap {
            processOneByOne(key, value)
        }
    }()
    for item := range task.ResultChan {
        // item is the processing result for one key/value pair,
        // consumed here inside TodoWork
        println(item)
    }
}
task.go:

var (
    ResultChan chan interface{}
)

func init() {
    ResultChan = make(chan interface{}, 100)
}

func processOneByOne(key string, value interface{}) {
    //
    // ... the actual processing of value goes here ...
    //
    // write the result to ResultChan; another goroutine reads from it
    ResultChan <- "Hello World"
}
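A gap in this sketch is that the `for item := range task.ResultChan` loop in TodoWork never ends, because ResultChan is never closed. Below is a self-contained variant (the package layout, map type, and worker body are my assumptions) that also runs processOneByOne concurrently and closes the channel once every call has finished, so the consumer loop can exit:

```
package main

import (
    "fmt"
    "sync"
)

var ResultChan = make(chan interface{}, 100)

func processOneByOne(key string, value interface{}) {
    // ... the real processing of value would go here ...
    ResultChan <- fmt.Sprintf("result for %s", key)
}

func TodoWork(allMap map[string]interface{}) {
    var wg sync.WaitGroup

    go func() {
        for key, value := range allMap {
            wg.Add(1)
            go func(k string, v interface{}) {
                defer wg.Done()
                processOneByOne(k, v)
            }(key, value)
        }
        // Wait for every worker, then close the channel so the
        // consumer loop below stops ranging.
        wg.Wait()
        close(ResultChan)
    }()

    for item := range ResultChan {
        fmt.Println(item)
    }
}

func main() {
    TodoWork(map[string]interface{}{"a": 1, "b": 2})
}
```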