C++ multi-threading demo
I recently started learning C++, and here's a simple program I wrote to demonstrate how multi-threading works. The program simply fetches JSON strings from a weather API for a list of cities read from a text file.
Can you take a look? I'd really appreciate a professional's view.
#include <iostream>
#include <ctime>
#include <typeinfo>
#include <thread>
#include <mutex>
#include <vector>
#include <fstream>
#include <string>
#include <curl/curl.h>

static size_t WriteCallback(void *contents, size_t size, size_t nmemb, void *userp);
void fetchJson(int);
void getCityList();
void fetch_weather_async();

const char *URL_BASE = "https://www.metaweather.com/api/location/search/?query=";
const char *CITY_LIST_FILE = "city_list.txt";

std::vector<std::string> cities;
int current_city = -1;
std::mutex city_counter_lock;

std::time_t start;
std::time_t finish;

int main(void)
{
    getCityList();
    std::cout << "Read " << cities.size() << " cities from file..." << std::endl;
    std::cout << "Using 2 threads to fetch..." << std::endl;

    std::thread thread_1(&fetch_weather_async);
    std::thread thread_2(&fetch_weather_async);

    start = std::time(nullptr);
    thread_1.join();
    thread_2.join();
    finish = std::time(nullptr);

    std::cout << "Duration " << (finish - start) << "s" << std::endl;
    return 0;
}

void fetch_weather_async()
{
    while (current_city + 1 < (int) cities.size())
    {
        city_counter_lock.lock();
        int city_index = 0;
        if (current_city + 1 < (int) cities.size())
        {
            city_index = ++current_city;
        }
        city_counter_lock.unlock();

        std::cout << "requesting " << city_index << std::endl;
        fetchJson(city_index);
        std::cout << "acquired " << city_index << std::endl;
    }
}

void getCityList()
{
    std::ifstream idFile(CITY_LIST_FILE);
    std::string currentLine;
    if (idFile.is_open())
    {
        while (!idFile.eof())
        {
            getline(idFile, currentLine);
            cities.push_back(currentLine);
        }
        idFile.close();
    }
}

void fetchJson(int i)
{
    CURL *curl;
    CURLcode res;
    std::string readBuffer;

    curl = curl_easy_init();
    if (curl)
    {
        curl_easy_setopt(curl, CURLOPT_URL, (URL_BASE + cities[i]).c_str());
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteCallback);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &readBuffer);
        res = curl_easy_perform(curl);
        curl_easy_cleanup(curl);

        std::cout << i << " : " << readBuffer << std::endl;
    }
}

static size_t WriteCallback(void *contents, size_t size, size_t nmemb, void *userp)
{
    ((std::string*)userp)->append((char*)contents, size * nmemb);
    return size * nmemb;
}
The city list file (city_list.txt) contains:
Adelaide
Amsterdam
Ankara
Athens
Atlanta
Auckland
Baghdad
Bangkok
Beijing
Belfast
Berlin
Blackpool
Bridgeport
Brussels
Bucharest
Budapest
Busan
Cairo
Caracas
Cardiff
Casablanca
Charlotte
Copenhagen
Dallas
Damascus
Dhaka
Dublin
Edinburgh
Glasgow
Helsinki
Jakarta
Kiev
Kinshasa
Lagos
Lahore
Lima
Lisbon
London
Madrid
Manila
Milan
Minsk
Moscow
Nairobi
Newark
Newcastle
Oakland
Oslo
Paris
Philadelphia
Plymouth
Portland
Portsmouth
Prague
Pyongyang
Riyadh
Rome
Santander
Santiago
Santorini
Seoul
Singapore
Sofia
Stockholm
Sunderland
Taipei
Tokyo
Vienna
Warsaw
Wellington
Windhoek
Zagreb
Tags: c++, multithreading
asked Jan 4 at 12:07 by Janith Jeewantha
Does curl_easy_setopt copy the CURLOPT_URL string? If it doesn't, the (...).c_str() pointer becomes invalid after that line and you end up with a pointer to freed memory. – Zeta, Jan 4 at 15:06
The problem never entered my mind. However, the method runs successfully again and again for all the URLs, so I guess the string is copied. – Janith Jeewantha, Jan 4 at 15:54
@Zeta curl makes a copy of every char* that is passed in. – ratchet freak, Jan 4 at 16:10
Of course curl will handle multiple simultaneous connections using the curl multi handle. This is a much more elegant way of doing it because you can easily handle thousands of network requests on a single thread. – Martin York, Jan 4 at 18:11
3 Answers
Accepted answer (by Martin York):
You have a threading bug here:
void fetch_weather_async()
{
    while (current_city + 1 < (int) cities.size())
    {
        // Say there is only one city left to fetch the data from,
        // and both threads have reached this lock at the same time
        // (multi-processor system). One will get the lock, the other
        // will stall waiting for the lock.
        city_counter_lock.lock();
        int city_index = 0;

        // The first thread through the lock will satisfy this
        // condition and get to process the last city.
        // The second thread through the lock (after the first
        // releases it) will find that this condition fails and
        // as a result will have `city_index = 0`.
        if (current_city + 1 < (int) cities.size())
        {
            city_index = ++current_city;
        }
        city_counter_lock.unlock();

        // Both threads now execute the following code.
        // But one is set to process the last item;
        // the other is going to re-process the first item.
        std::cout << "requesting " << city_index << std::endl;
        fetchJson(city_index);
        std::cout << "acquired " << city_index << std::endl;

        // You have encountered your first race condition.
    }
}
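For illustration (not part of the original answer), a minimal sketch of one way to remove the race: replace the mutex-guarded counter with a std::atomic and have each thread claim an index atomically, exiting once the claimed index runs past the end of the list.

#include <atomic>

std::atomic<int> next_city{0};   // hypothetical replacement for current_city

void fetch_weather_async()
{
    while (true)
    {
        // fetch_add returns the previous value, so every claimed index is unique.
        int city_index = next_city.fetch_add(1);
        if (city_index >= (int) cities.size())
            break;   // nothing left to claim

        std::cout << "requesting " << city_index << std::endl;
        fetchJson(city_index);
        std::cout << "acquired " << city_index << std::endl;
    }
}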
Your timing is not accurate:
std::thread thread_1(&fetch_weather_async);
std::thread thread_2(&fetch_weather_async);

// By the time you get the time here, the two threads could have
// actually finished, so you are just timing the cost of the join
// operation.
// --
// Or the threads could be half finished by the time you get here,
// so what you are timing is indeterminate.
start = std::time(nullptr);

thread_1.join();
thread_2.join();
finish = std::time(nullptr);
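A minimal sketch of a more accurate measurement (an illustration, assuming <chrono> is acceptable): take the start time before the threads are created and use a monotonic clock.

#include <chrono>

auto t0 = std::chrono::steady_clock::now();   // before the threads are launched

std::thread thread_1(&fetch_weather_async);
std::thread thread_2(&fetch_weather_async);
thread_1.join();
thread_2.join();

auto t1 = std::chrono::steady_clock::now();
std::cout << "Duration "
          << std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count()
          << "ms" << std::endl;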
Your biggest bottleneck is here:
city_counter_lock.lock();
int city_index = 0;
if (current_city + 1 < (int) cities.size())
city_index = ++current_city;
city_counter_lock.unlock();
By simply removing this bottleneck I saved 5 seconds (averaged over three runs). The time to run was reduced from 18 to 13 seconds (timed externally). I simply made one thread do the even cities and the other thread do the odd cities (I know it's not a perfect solution, but I wanted to demonstrate the cost of threads blocking each other at such a coarse grain).
// I changed the declaration of `fetch_weather_async()`.
void fetch_weather_async(bool);

// I changed how the thread is called.
std::thread thread_1(fetch_weather_async, false);
std::thread thread_2(fetch_weather_async, true);

// Finally, I simply changed the function
// so that the threads start at different points.
void fetch_weather_async(bool odd)
{
    unsigned int cityIndex = (odd ? 1 : 0);
    for (; cityIndex < cities.size(); cityIndex += 2)
    {
        std::cout << "requesting " << cityIndex << std::endl;
        fetchJson(cityIndex);
        std::cout << "acquired " << cityIndex << std::endl;
    }
}
Another bug (but this one is a normal bug):

// This is a classic anti-pattern for reading a file.
// The trouble is that the last read reads up to but not past
// the EOF. Thus, even though there is no data left in the
// file to read, the method eof() will return false.
// So you will enter the loop even though there is nothing
// to read.
while (!idFile.eof())
{
    // The problem is here.
    // If there is nothing left to read in the file, this
    // getline will fail. You should be checking the result
    // of calling getline before performing the next action.
    //
    // I am not sure of the effect on `currentLine` when
    // the call fails: it is either made blank or left unchanged.
    // So the last line in your list is either empty
    // or a repeat of the last item in the file.
    getline(idFile, currentLine);
    cities.push_back(currentLine);
}
The correct way to use a loop to read a file is:
std::ifstream file("FileName");
std::string line;

// Notice we check the result of the `getline()` as part of
// the loop condition, so we only enter the loop body if the
// read happened correctly.
while (std::getline(file, line))
{
    cities.push_back(line);
}
If all your cities are guaranteed to be one word, you can use a simpler trick (braces avoid the most vexing parse here; std::istream_iterator needs the <iterator> header):

std::vector<std::string> cities{std::istream_iterator<std::string>(file),
                                std::istream_iterator<std::string>()};
Threading and Networking.
I would say that threading plus networking is not the best way to demonstrate a good threading application.
The problem is that your thread spends an awful lot of its time blocked, waiting for the network to catch up and give it data. Usually what you do is have a single thread handle many connections simultaneously and use an event library to tell the thread when there is action on any of the sockets. I like libEvent for this; it is one of the more modern ones.
But curl has its own way of doing this with the "multi handle". You can create many curl easy handles and attach them to a multi handle, and then a single thread will watch all the handles simultaneously.
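For illustration (not from the original answer), a minimal sketch of the multi-handle approach; it assumes the question's URL_BASE, cities and WriteCallback, and omits error checking.

CURLM *multi = curl_multi_init();
std::vector<std::string> buffers(cities.size());
std::vector<CURL*> handles;

// Create one easy handle per city and hand them all to the multi handle.
for (size_t i = 0; i < cities.size(); ++i)
{
    CURL *easy = curl_easy_init();
    curl_easy_setopt(easy, CURLOPT_URL, (URL_BASE + cities[i]).c_str());
    curl_easy_setopt(easy, CURLOPT_WRITEFUNCTION, WriteCallback);
    curl_easy_setopt(easy, CURLOPT_WRITEDATA, &buffers[i]);
    curl_multi_add_handle(multi, easy);
    handles.push_back(easy);
}

// Drive all transfers from a single thread.
int still_running = 0;
do
{
    curl_multi_perform(multi, &still_running);
    curl_multi_wait(multi, nullptr, 0, 1000, nullptr);   // wait up to 1s for activity
} while (still_running > 0);

for (CURL *easy : handles)
{
    curl_multi_remove_handle(multi, easy);
    curl_easy_cleanup(easy);
}
curl_multi_cleanup(multi);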
Curl Handle re-use.
As Edward mentioned (but it is worth repeating): reusing the same curl handle is essential for efficiency when connecting to the same site.
Code Review
Let's stop using C-casts. They are hard to see and dangerous.
((std::string*)userp)->append((char*)contents, size * nmemb);
To make them obvious for code review, please use the C++ casts. They emphasize how dangerous the cast is and make it visible:
reinterpret_cast<std::string*>(userp)->append(static_cast<char*>(contents), size * nmemb);
Answer (by ratchet freak, edited by Edward):
Prefer using a lock guard:
int city_index = 0;
std::lock_guard<std::mutex> guard(city_counter_lock);
if (current_city + 1 < (int) cities.size())
city_index = ++current_city;
This makes sure that an exception will not leave the mutex locked.
You will need to handle failures to get the webpage. If res != CURLE_OK then a failure occurred somewhere.
You are encouraged to reuse the CURL handle; you can do so by creating it inside fetch_weather_async before the loop and passing it into fetchJson:
void fetch_weather_async()
{
    CURL *curl;
    curl = curl_easy_init();
    if (!curl) return;

    while (current_city + 1 < (int) cities.size())
    {
        int city_index = 0;
        {
            std::lock_guard<std::mutex> guard(city_counter_lock);
            if (current_city + 1 < (int) cities.size())
                city_index = ++current_city;
        }
        std::cout << "requesting " << city_index << std::endl;
        fetchJson(city_index, curl);
        std::cout << "acquired " << city_index << std::endl;
    }
    curl_easy_cleanup(curl);
}

void fetchJson(int i, CURL *curl)
{
    CURLcode res;
    std::string readBuffer;

    curl_easy_setopt(curl, CURLOPT_URL, (URL_BASE + cities[i]).c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteCallback);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &readBuffer);
    res = curl_easy_perform(curl);

    if (res == CURLE_OK)
        std::cout << i << " : " << readBuffer << std::endl;
}
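If you also want to report the failure rather than dropping the output silently, a small sketch (an illustration, using curl_easy_strerror) for the end of fetchJson:

res = curl_easy_perform(curl);
if (res == CURLE_OK)
    std::cout << i << " : " << readBuffer << std::endl;
else
    std::cerr << i << " : curl error: " << curl_easy_strerror(res) << std::endl;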
It's absolutely vital to reuse the CURL handle. If you don't, it will open and close a new TLS session for every request, which is very slow and wasteful of CPU cycles at both ends. – Edward, Jan 4 at 15:12
Not reusing the CURL handle was a terrible mistake. I can't believe I did such a thing. Thank you for showing me. – Janith Jeewantha, Jan 5 at 3:52
Answer (by Martin):
When you use threads, never never ever use globals. Ever.
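For illustration (not part of the original answer), a minimal sketch of what that could look like for the question's program: keep the shared state in main() and hand each thread references to it.

#include <atomic>
#include <functional>
#include <string>
#include <thread>
#include <vector>

// Hypothetical worker: everything it touches arrives through its parameters.
void fetch_weather_async(const std::vector<std::string>& cities,
                         std::atomic<int>& next_city)
{
    int index;
    while ((index = next_city.fetch_add(1)) < (int) cities.size())
    {
        // fetch the weather for cities[index] here
    }
}

int main()
{
    std::vector<std::string> cities = {"Adelaide", "Amsterdam", "Ankara"};
    std::atomic<int> next_city{0};

    std::thread t1(fetch_weather_async, std::cref(cities), std::ref(next_city));
    std::thread t2(fetch_weather_async, std::cref(cities), std::ref(next_city));
    t1.join();
    t2.join();
}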
Although it is a good suggestion, it would be great to expand on it to provide more context. – Incomputable, Jan 5 at 12:33
The context is very general: no globals makes the following tasks easier: unit testing, code reuse, multiple instances of objects, porting, refactoring, adding threading (since everything is self-contained, you just need to worry about locking inside the file). I think all non-static, non-constant globals should be banned from the C standard, at least when the -pedantic flag is used. We can do without them. Especially extern on variables should be deprecated. – Martin, Jan 5 at 19:33
The comment is way too general. Good advice, but I am sure we could come up with situations where global mutable state is required (synchronization). Rather than say "not non-static, non-constant", say global "mutable" state is a bad idea. – Martin York, Jan 5 at 20:31