References
1. S. Guo, A. G. Parameswaran, and H. Garcia-Molina. So who won?: Dynamic max discovery with the crowd. In SIGMOD, pages 385–396, 2012.
2. Data-driven crowdsourcing: Management, mining, and applications. In ICDE, pages 1527–1529. IEEE, 2015.
3. S. Das, P. S. G. C., A. Doan, J. F. Naughton, G. Krishnan, R. Deep, E. Arcaute, V. Raghavendra, and Y. Park. Falcon: Scaling up hands-off crowdsourced entity matching to build cloud services. In SIGMOD, pages 1431–1446, 2017.
4. S. B. Davidson, S. Khanna, T. Milo, and S. Roy. Using the crowd for top-k and group-by queries. In ICDT, pages 225–236, 2013.
5. A. Doan, M. J. Franklin, D. Kossmann, and T. Kraska. Crowdsourcing applications and platforms: A data management perspective. Proceedings of the VLDB Endowment, 4(12):1508–1509, 2011.
6. J. Fan, M. Zhang, S. Kok, M. Lu, and B. C. Ooi. CrowdOp: Query optimization for declarative crowdsourcing systems. IEEE Trans. Knowl. Data Eng., 27(8):2078–2092, 2015.
7. M. J. Franklin, D. Kossmann, T. Kraska, S. Ramesh, and R. Xin. CrowdDB: Answering queries with crowdsourcing. In SIGMOD, pages 61–72, 2011.
8. J. Gao, Q. Li, B. Zhao, W. Fan, and J. Han. Truth discovery and crowdsourcing aggregation: A unified perspective. Proceedings of the VLDB Endowment, 8(12):2048–2049, 2015.
9. S. Guo, A. G. Parameswaran, and H. Garcia-Molina. So who won?: Dynamic max discovery with the crowd. In SIGMOD, pages 385–396, 2012.
10. G. Li, C. Chai, J. Fan, X. Weng, J. Li, Y. Zheng, Y. Li, X. Yu, X. Zhang, and H. Yuan. CDB: optimizing queries with crowd-based selections and joins. In SIGMOD, pages 1463–1478, 2017.
11. G. Li, Y. Zheng, J. Fan, J. Wang, and R. Cheng. Crowdsourced data management: Overview and challenges. In SIGMOD, 2017.
A. Marcus, D. R. Karger, S. Madden, R. Miller, and S. Oh. Counting with the crowd. PVLDB, 6(2):109–120, 2012.
12. A. Marcus, E. Wu, D. R. Karger, S. Madden, and R. C. Miller. Human-powered sorts and joins. PVLDB, 5(1):13–24, 2011.
13. A. Marcus, E. Wu, S. Madden, and R. C. Miller. Crowdsourced databases: Query processing with people. In CIDR, pages 211–214, 2011.
14. A. G. Parameswaran, H. Garcia-Molina, H. Park, N. Polyzotis, A. Ramesh, and J. Widom. CrowdScreen: Algorithms for filtering data with humans. In SIGMOD, pages 361–372, 2012.
15. H. Park and J. Widom. Crowdfill: collecting structured data from the crowd. In SIGMOD, pages 577–588, 2014.
16. J. Wang, T. Kraska, M. J. Franklin, and J. Feng. CrowdER: Crowdsourcing entity resolution. PVLDB, 5(11):1483–1494, 2012.
17. J. Wang, G. Li, T. Kraska, M. J. Franklin, and J. Feng. Leveraging transitive relations for crowdsourced joins. In SIGMOD, 2013.
18. L. Kazemi, C. Shahabi, and L. Chen. GeoTruCrowd: Trustworthy query answering with spatial crowdsourcing. In SIGSPATIAL, pages 314–323, 2013.
19. H. To, G. Ghinita, and C. Shahabi. A framework for protecting worker location privacy in spatial crowdsourcing. Proceedings of the VLDB Endowment, 7(10):919–930, 2014.
18. S. B. Roy, I. Lykourentzou, S. Thirumuruganathan, S. Amer-Yahia, and G. Das. Crowds, not drones: Modeling human factors in interactive crowdsourcing. 2013.
19. H. Rahman, S. Thirumuruganathan, S. B. Roy, S. Amer-Yahia, and G. Das. Worker skill estimation in team-based tasks. Proceedings of the VLDB Endowment, 8(11):1142–1153, 2015.
20. S. Amer-Yahia and S. B. Roy. Human factors in crowdsourcing. Proceedings of the VLDB Endowment, 9(13):1615–1618, 2016.
21. S. B. Roy, I. Lykourentzou, S. Thirumuruganathan, S. Amer-Yahia, and G. Das. Optimization in knowledge-intensive crowdsourcing. CoRR arXiv:1401.1302, 2014.
22. K. Ikeda, A. Morishima, H. Rahman, S. B. Roy, S. Thirumuruganathan, S. Amer-Yahia, and G. Das. Collaborative crowdsourcing with Crowd4U. Proceedings of the VLDB Endowment, 9(13):1497–1500, 2016.
23. A. Morishima, S. Amer-Yahia, and S. B. Roy. Crowd4U: An initiative for constructing an open academic crowdsourcing network. In HCOMP, 2014.
24. M. Esfandiari, K. B. Patel, S. Amer-Yahia, and S. B. Roy. Crowdsourcing analytics with CrowdCur. In SIGMOD, pages 1701–1704, 2018.
25. M. Esfandiari, S. B. Roy, and S. Amer-Yahia. Explicit preference elicitation for task completion time. In CIKM, pages 1233–1242, 2018.
26. M. Esfandiari, D. Wei, S. Amer-Yahia, and S. B. Roy. Optimizing peer learning in online groups with affinities. In KDD, pages 1216–1226, 2019.
27. M. A. Salam, M. E. Koone, S. Thirumuruganathan, G. Das, and S. B. Roy. A human-in-the-loop attribute design framework for classification. In WWW, pages 1612–1622, 2019.
28. A. Anagnostopoulos, L. Becchetti, C. Castillo, A. Gionis, and S. Leonardi. Online team formation in social networks. In WWW, pages 839–848, 2012.
28. H. Rahman, S. B. Roy, S. Thirumuruganathan, S. Amer-Yahia, and G. Das. Optimized group formation for solving collaborative tasks. The VLDB Journal, 28(1):1–23, 2019.
29. Jacob Abernethy, Yiling Chen, and Jennifer Wortman Vaughan. Efficient market making via convex optimization, and a connection to online learning. ACM Transactions on Economics and Computation, 1(2):Article 12, 2013.
30. Ittai Abraham, Omar Alonso, Vasilis Kandylas, Rajesh Patel, Steven Shelford, and Aleksandrs Slivkins. How many workers to ask? Adaptive exploration for collecting high quality labels. In SIGIR, 2016.
31. Arpit Agarwal, Debmalya Mandal, David C. Parkes, and Nisarg Shah. Peer prediction with heterogeneous users. In ACM EC, 2017.
32. Daniel Haas, Jiannan Wang, Eugene Wu, and Michael J. Franklin. CLAMShell: Speeding up crowds for low-latency data labeling. Proceedings of the VLDB Endowment, 9(4):372–383, 2015.
33. Omar Alonso. Implementing crowdsourcing-based relevance experimentation: An industrial perspective. Information Retrieval, 16(2):101–120, 2013.
34. Omar Alonso, Daniel E. Rose, and Benjamin Stewart. Crowdsourcing for relevance evaluation. ACM SIGIR Forum, 42(2):9–15, 2008.
35. Vamshi Ambati, Stephan Vogel, and Jaime Carbonell. Collaborative workflow for crowdsourcing translation. In CSCW, 2012.
36. Paul André, Haoqi Zhang, Juho Kim, Lydia B. Chilton, Steven P. Dow, and Robert C. Miller. Community clustering: Leveraging an academic crowd to form coherent conference sessions. In HCOMP, 2013.
37. Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. Machine bias: There's software used across the country to predict future criminals and it's biased against blacks. ProPublica article accessed at https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing, 2016.
38. Pavel D. Atanasov, Phillip Rescober, Eric Stone, Samuel A. Swift, Emile Servan-Schreiber, Philip E. Tetlock, Lyle Ungar, and Barbara Mellers. Distilling the wisdom of crowds: Prediction markets versus prediction polls. Management Science, 63(3):691–706, 2017.
39. Bahadir Ismail Aydin, Yavuz Selim Yilmaz, Yaliang Li, Qi Li, Jing Gao, and Murat Demirbas. Crowdsourcing for multiple-choice question answering. In AAAI, 2014.
40. Solon Barocas and Andrew Selbst. Big data’s disparate impact. California Law Review, 104, 2016.
Jonathan Baron, Barbara A. Mellers, Philip E. Tetlock, Eric Stone, and Lyle H. Ungar. Two reasons to make aggregated probability forecasts more extreme. Decision Analysis, 11(2):133–145, 2014.
41. Joyce Berg, Robert Forsythe, Forrest Nelson, and Thomas Rietz. Results from a dozen years of election futures markets research. Handbook of experimental economics results, 1:742–751, 2008.
42. Michael Bernstein, Greg Little, Rob Miller, Bjoern Hartmann, Mark Ackerman, David Karger, David Crowell, and Katrina Panovich. Soylent: A word processor with a crowd inside. In UIST, 2010.
43. Anant Bhardwaj, Juho Kim, Steven P. Dow, David Karger, Sam Madden, Robert C. Miller, and Haoqi Zhang. Attendee-sourcing: Exploring the design space of community-informed conference scheduling. In HCOMP, 2014.
44. Jeffrey P. Bigham. Reaching dubious parity with hamstrung humans. Blog post accessed at http://jeffreybigham.com/blog/2017/reaching-dubious-parity-with-hamstrung-humans.html, 2017.
45. David M. Blei and John D. Lafferty. Topic models. Text mining: Classification, clustering, and applications, 10(71):34, 2009.
46. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3(Jan):993–1022, 2003.
47. Jordan Boyd-Graber, Yuening Hu, and David Mimno. Applications of topic models. Foundations and Trends in Information Retrieval, 11(2–3):143–296, 2017.
48. Jonathan Bragg, Mausam, and Daniel S. Weld. Crowdsourcing multi-label classification for taxonomy creation. In HCOMP, 2013.
49. Michael Brooks, Saleema Amershi, Bongshin Lee, Steven Drucker, Ashish Kapoor, and Patrice Simard. FeatureInsight: Visual support for error-driven feature ideation in text classification. In IEEE VAST, 2015.
50. Michael Buhrmester, Tracy Kwang, and Samuel D. Gosling. Amazon’s Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science, 6(1):3–5, 2011.
51. Chris Callison-Burch and Mark Dredze. Creating speech and language data with Amazon’s Mechanical Turk. In NAACL HLT Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk, 2010.
52. Alex Campolo, Madelyn Sanfilippo, Meredith Whittaker, and Kate Crawford. AI Now 2017 Report. Accessed at https://ainowinstitute.org/AI_Now_2017_Report.pdf, 2017.
53. Logan Casey, Jesse Chandler, Adam Seth Levine, Andrew Proctor, and Dara Z. Strolovitch. Intertemporal differences among MTurk worker demographics. Working paper on PsyArXiv, 2017.
54. Dana Chandler and Adam Kapelner. Breaking monotony with meaning: Motivation in crowdsourcing markets. Journal of Economic Behavior and Organization, 90:123–133, 2013.
55. Jesse Chandler, Pam Mueller, and Gabriele Paolacci. Nonnaïveté among Amazon Mechanical Turk workers: Consequences and solutions for behavioral researchers. Behavior Research Methods, 46(1):112–130, 2014.
56. Jesse J. Chandler and Gabriele Paolacci. Lie for a dime: When most prescreening responses are honest but most study participants are imposters. Social Psychological and Personality Science, 8(5):500–508, 2017.
57. Jonathan Chang, Jordan Boyd-Graber, Chong Wang, Sean Gerrish, and David M. Blei. Reading tea leaves: How humans interpret topic models. In NIPS, 2009.
58. Shuchi Chawla, Jason D. Hartline, and Balasubramanian Sivan. Optimal crowdsourcing contests. Games and Economic Behavior, 2015.
59. Yiling Chen and David M. Pennock. A utility framework for bounded-loss market makers. In UAI, 2007.
60. Yiling Chen and Jennifer Wortman Vaughan. A new understanding of prediction markets via no-regret learning. In ACM EC, 2010.
61. Yiling Chen, Arpita Ghosh, Michael Kearns, Tim Roughgarden, and Jennifer Wortman Vaughan. Mathematical foundations of social computing. Communications of the ACM, 59(12):102–108, December 2016.
62. Lydia Chilton, Juho Kim, Paul André, Felicia Cordeiro, James Landay, Dan Weld, Steven P. Dow, Robert C. Miller, and Haoqi Zhang. Frenzy: Collaborative data organization for creating conference sessions. In CHI, 2014.
63. Lydia B. Chilton, Greg Little, Darren Edge, Daniel S. Weld, and James A. Landay. Cascade: Crowdsourcing taxonomy creation. In CHI, 2013.
64. Alexandra Chouldechova. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, Special Issue on Social and Technical Trade-Offs, 2017.
65. Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. Algorithmic decision making and the cost of fairness. In KDD, 2017.
66. Anirban Dasgupta and Arpita Ghosh. Crowdsourced judgement elicitation with endogenous proficiency. In WWW, 2013.
67. Susan B. Davidson, Sanjeev Khanna, Tova Milo, and Sudeepa Roy. Top-k and clustering with noisy comparisons. ACM Transactions on Database Systems, 39(4):35:1–39, 2014.
68. Philip Dawid and Allan Skene. Maximum likelihood estimation of observer error-rates using the EM algorithm. Journal of the Royal Statistical Society, Series C (Applied Statistics), 28(1):20–28, 1979.
69. Gianluca Demartini, Djellel Eddine Difallah, and Philippe Cudré-Mauroux. ZenCrowd: Leveraging probabilistic reasoning and crowdsourcing techniques for large-scale entity linking. In WWW, 2012.
70. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large- scale hierarchical image database. In CVPR, 2009.
71. Berkeley J. Dietvorst, Joseph P. Simmons, and Cade Massey. Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1):114, 2015.
72. Berkeley J. Dietvorst, Joseph P. Simmons, and Cade Massey. Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Management Science, 2016.
73. Djellel Eddine Difallah, Michele Catasta, Gianluca Demartini, Panagiotis G. Ipeirotis, and Philippe Cudré-Mauroux. The dynamics of micro-task crowdsourcing: The case of Amazon MTurk. In WWW, 2015.
74. Dominic DiPalantino and Milan Vojnovic. Crowdsourcing and all-pay auctions. In ACM EC, 2009.
75. Finale Doshi-Velez and Been Kim. Towards a rigorous science of interpretable machine learning. CoRR arXiv:1702.08608, 2017.
76. Mary T. Dzindolet, Linda G. Pierce, Hall P. Beck, and Lloyd A. Dawe. The perceived utility of human and automated aids in a visual detection task. Human Factors, 44(1): 79–94, 2002.
77. Robert C. Edgar and Serafim Batzoglou. Multiple sequence alignment. Current opinion in structural biology, 16(3):368–373, 2006.
78. Ju Fan, Guoliang Li, Beng Chin Ooi, Kian-Lee Tan, and Jianhua Feng. iCrowd: An adaptive crowdsourcing framework. In SIGMOD, 2015.
79. Oluwaseyi Feyisetan, Elena Simperl, Max Van Kleek, and Nigel Shadbolt. Improving paid microtasks through gamification and adaptive furtherance incentives. In WWW, 2015.
80. Urs Fischbacher and Franziska Föllmi-Heusi. Lies in disguise: An experimental study on cheating. Journal of the European Economic Association, 11(3):525–547, 2013.
81. Chao Gao, Yu Lu, and Dengyong Zhou. Exact exponent in optimal rates for crowdsourcing. In ICML, 2016.
82. Xi Alice Gao, Yoram Bachrach, Peter Key, and Thore Graepel. Quality expectation-variance tradeoffs in crowdsourcing contests. In AAAI, 2012.
83. Xi Alice Gao, Andrew Mao, Yiling Chen, and Ryan Prescott Adams. Trick or treat: Putting peer prediction to the test. In ACM EC, 2014.
84. Yashesh Gaur, Florian Metze, Yajie Miao, and Jeffrey P. Bigham. Using keyword spotting to help humans correct captioning faster. In INTERSPEECH, 2015.
85. Yashesh Gaur, Florian Metze, and Jeffrey P. Bigham. Manipulating word lattices to incorporate human corrections. In INTERSPEECH, 2016.
86. Timnit Gebru, Jonathan Krause, Jia Deng, and Li Fei-Fei. Scalable annotation of fine-grained objects without experts. In CHI, 2017.
87. Arpita Ghosh, Satyen Kale, and Preston McAfee. Who moderates the moderators? Crowdsourcing abuse detection in user-generated content. In ACM EC, 2011.
87. Daniel G. Goldstein, R. Preston McAfee, and Siddharth Suri. The cost of annoying ads. In WWW, 2013.
88. Daniel G. Goldstein, Siddharth Suri, R. Preston McAfee, Matthew Ekstrand-Abueg, and Fernando Diaz. The economic and cognitive costs of annoying display advertisements. Journal of Marketing Research, 51(6):742–752, 2014.
89. Ryan Gomes, Peter Welinder, Andreas Krause, and Pietro Perona. Crowdclustering. In NIPS, 2011.
90. Joseph K. Goodman and Gabriele Paolacci. Crowdsourcing consumer research. Journal of Consumer Research, 44(1):196–210, 2017.
91. Mary L. Gray, Siddharth Suri, Syed Shoaib Ali, and Deepti Kulkarni. The crowd is a collaborative network. In CSCW, 2016.
92. Neha Gupta, David Martin, Benjamin V. Hanrahan, and Jacki O'Neill. Turk-life in India. In the International Conference on Supporting Group Work, 2014.
93. Juho Hamari, Jonna Koivisto, and Harri Sarsa. Does gamification work? – A literature review of empirical studies on gamification. In Hawaii International Conference on System Sciences, 2014.
94. Robin Hanson. Combinatorial information market design. Information Systems Frontiers, 5(1):105–119, 2003.
95. Christopher G. Harris. You're hired! An examination of crowdsourcing incentive models in human resource tasks. In WSDM Workshop on Crowdsourcing for Search and Data Mining, 2011.
96. Jeffrey Heer and Michael Bostock. Crowdsourcing graphical perception: Using Mechanical Turk to assess visualization design. In CHI, 2010.
Hannes Heikinheimo and Antti Ukkonen. The crowd-median algorithm. In HCOMP, 2013.
97. Chien-Ju Ho and Jennifer Wortman Vaughan. Online task assignment in crowdsourcing markets. In AAAI, 2012.
98. Chien-Ju Ho, Shahin Jabbari, and Jennifer Wortman Vaughan. Adaptive task assignment for crowdsourced classification. In ICML, 2013.
99. Chien-Ju Ho, Aleksandrs Slivkins, Siddharth Suri, and Jennifer Wortman Vaughan. Incentivizing high quality crowdwork. In WWW, 2015.
100. Chien-Ju Ho, Aleksandrs Slivkins, and Jennifer Wortman Vaughan. Adaptive contract design for crowdsourcing markets: Bandit algorithms for repeated principal-agent problems. Journal of Artificial Intelligence Research, 55:317–359, 2016.
101. John J. Horton, David Rand, and Richard Zeckhauser. The online laboratory: Conducting experiments in a real labor market. Experimental Economics, 14(3):399–425, 2011.
102. Yuening Hu, Jordan Boyd-Graber, Brianna Satinoff, and Alison Smith. Interactive topic modeling. Machine Learning, 95:423–469, 2014.
103. Lilly C. Irani and M. Six Silberman. Turkopticon: Interrupting worker invisibility in Amazon Mechanical Turk. In CHI, 2013.
104. Ayush Jain, Akash Das Sarma, Aditya Parameswaran, and Jennifer Widom. Understanding workers, developing effective tasks, and enhancing marketplace dynamics: A study of a large crowdsourcing marketplace. Proceedings of the VLDB Endowment, 10(7):829–840, 2017.
105. Jongbin Jung, Connor Concannon, Ravi Shroff, Sharad Goel, and Daniel G. Goldstein. Simple rules for complex decisions. CoRR arXiv:1702.04690, 2017.
106. Radu Jurca and Boi Faltings. Mechanisms for making crowds truthful. Journal of Artificial Intelligence Research, 34:209–253, 2009.
107. Ece Kamar. Directions in hybrid intelligence: Complementing AI systems with human intelligence. Abstract for IJCAI Early Career Spotlight Track Talk, 2016.
108. Ece Kamar and Eric Horvitz. Incentives for truthful reporting in crowdsourcing (short paper). In AAMAS, 2012.