What better time to blog about charity than during Ramadan, the month of giving? In late 2015, we partnered with LaunchGood, a crowdfunding platform, to study ways to improve the overall success of the charitable campaigns they support. We decided to tackle the problem from a data-driven perspective: we examined two years' worth of data on campaigns and donors. Here is a detailed technical report of our key findings.
tl;dr — the short version of the paper:
We had a simple idea: if we could redistribute funds from campaigns that raised more than their fundraising goals to other, less successful campaigns, we would increase the platform's efficiency (measured as the fraction of campaigns meeting their goals). This is rather obvious and requires no data analysis to affirm. It is also quite problematic: donors would not appreciate a platform redistributing their donations across unknown campaigns without their explicit permission. Such a scheme could also incentivize campaign organizers to inflate their goals or spend less effort creating persuasive campaigns.
What is less obvious and more interesting is whether we can achieve efficiency gains by (i) only redistributing the funds of a donor among campaigns she contributed to and (ii) only within the time frame that her contributions and the campaign periods overlap. Without access to real data, it is difficult to answer this question. First, it is not clear how many repeat donors exist on a platform to support this form of redistribution. Second, it is not clear how often contributions to different campaigns by the same donor overlap. Armed with two years' worth of data generously provided by LaunchGood, we set out to answer this question.
Our paper demonstrates the viability of such a redistribution: redistributing only among campaigns a donor has contributed to does increase efficiency. Moreover, even if only a small proportion of repeat donors accept such redistributions, one can still achieve efficiency gains.
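To make the idea concrete, here is a minimal Python sketch of a greedy per-donor redistribution. The campaign names and figures are invented for illustration, and the actual mechanism studied in the paper (which also accounts for timing overlap) is more involved:

```python
# Hypothetical campaigns this donor contributed to (names and figures invented).
campaigns = {
    "water-well": {"goal": 5000, "raised": 7200},  # overfunded
    "school-kit": {"goal": 3000, "raised": 2600},  # short of its goal
    "food-drive": {"goal": 4000, "raised": 3900},  # short of its goal
}
# What this donor gave to each campaign.
donations = {"water-well": 400, "school-kit": 100, "food-drive": 50}

def redistribute(campaigns, donations):
    """Reclaim the donor's money from her overfunded campaigns and pour it
    into her underfunded ones (neediest first), never moving more than she
    actually donated."""
    surplus = 0
    for name, amount in donations.items():
        c = campaigns[name]
        extra = c["raised"] - c["goal"]
        if extra > 0:
            take = min(amount, extra)  # reclaim at most her own donation
            c["raised"] -= take
            surplus += take
    # Hand the reclaimed surplus to her campaigns, largest deficit first.
    for name in sorted(donations, key=lambda n: campaigns[n]["raised"] - campaigns[n]["goal"]):
        c = campaigns[name]
        give = min(surplus, max(c["goal"] - c["raised"], 0))
        c["raised"] += give
        surplus -= give
    return campaigns

redistribute(campaigns, donations)
# "school-kit" now meets its goal; "water-well" still exceeds its own.
```

In this toy example, the donor's reclaimed surplus pushes one of her campaigns over its goal without the platform ever touching other donors' money, which is exactly the constraint that makes the scheme palatable.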
So why is it that crowdfunding platforms don’t support such redistributions?
It is tricky! Understanding the psyche of donors is crucial to maximizing contributions. As Obama's team of campaign strategists pointed out, the more complicated and longer the donation process, the fewer the donations. So if you ask donors to select multiple campaigns to co-fund and to fine-tune how redistributions occur, the hassle of the process itself might turn them off. If you redistribute without explicit permission, you will anger your base of donors.
Simpler redistribution schemes might still work. For example, LaunchGood just rolled out an automatic giving plan for Ramadan, where donors give LaunchGood a set budget and LaunchGood distributes funds from it every day for 30 days, as it sees fit, among different campaigns. An opt-in strategy at checkout might also work for repeat donors: on completing a subsequent donation, the system can ask them whether they would be willing to redistribute across their earlier contributions and the most recent one.
Designing new systems & interfaces that increase efficiency and engage more donors is definitely a rich area for HCI research.
An effective student-advisor relationship is the foundation of good academic research. This relationship is often structured around weekly meetings.
As a student, keep in mind that your research problem is your main and only work focus, and you are expected to initiate and test out ideas as well as conduct the majority of the creative work (design prototypes, UIs, design experiments, code, think of a proof structure, etc.) or grunt work (code, prove, conduct experimental runs, etc.).
The advisor is usually your backup, wiser brain. Often, the advisor presents you with the research problems. She will likely guide you through the problem, outline solutions, remind you of the big picture, refer you to papers, make you think of alternative solutions, designs, implementations, unstick you if you find yourself stuck, help you analyze or figure out the experimental data, and so on. The advisor, however, is a busy, multitasking machine, often advising multiple students with varying demands on her time, teaching courses, writing grants, building research networks, serving on conference committees, or dealing with university business. I never appreciated the faculty workload until I became an assistant professor.
The advisor's brain is thus an expensive resource, which you must manage efficiently. I hope you find some benefit in these advisor meeting & management tips:
1) Keep a weekly meeting: Meet your advisor at least once a week for roughly 40 minutes to an hour. Meeting every other week leads to slow progress. The CS research cycle roughly gives you three chances to publish a year, with four months between conference deadlines. If your advisor meetings occur twice a month, you are not making enough progress to target even one of those conferences a year. Meeting twice a week (unless you are approaching a deadline) can add unnecessary stress and pressure. Here's why:
2) Make progress at every meeting: Advisors love to see new results. Research to them can be as exciting as a good TV series. Imagine a series where the story remains roughly the same from one episode to the next: the approval ratings will surely drop and the show will be canceled. If you made no progress, cancel the meeting, but be sure to come back the following week with a grand opening. The first episode of a TV series after a break almost always reminds you why you have been watching the show religiously in the first place.
3) Have a meeting agenda: Prepare for your meetings. You want to get the most from your advisor because you won’t meet them again for a week. While advisors don’t disappear, and often want to get emails from you, nothing beats a face-to-face meeting in terms of creative energy and problem resolution. Here is a meeting agenda for a meeting I had with one of my advisors, Joe Hellerstein:
Note the simple agenda structure. The first column represents the topic. If you work on multiple projects, or mini projects within a big one, this is where you list them. Order the content by the priority of the topic. The two middle columns are succinct lists of updates/problems encountered and discussion areas. The last column is a very important one that usually stays empty before the meeting. During the meeting, you take notes here and then summarize a plan for the next meeting.
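Since the agenda itself appears as an image in the original post, here is a hypothetical reconstruction of the four-column layout (the topics and entries are invented, not from my actual agenda):

```
| Topic           | Updates / Problems          | To Discuss                     | Notes & Next Steps      |
|-----------------|-----------------------------|--------------------------------|-------------------------|
| Query optimizer | ran scaling experiment;     | which baseline to compare      | (filled in during       |
|                 | hit a planner bug           | against?                       |  the meeting)           |
| Paper reading   | read two related papers     | does their pruning idea apply? |                         |
```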
The agenda must be short and easy to scan within 1-3 minutes. Please feel free to follow my simple agenda template (My sister advised me on this agenda structure). Leave your big ideas for:
4) Bring slide decks / tech reports / sketches / demos: If the agenda is the enticing trailer, the slide deck or report is the real show. Don't overthink the presentation of these. The goal is to get to the heart of the matter as quickly as possible: what you have done, what the problems are, and how you intend to address them. If you ran an experiment, put your results in a slide deck and use the notes panel to jot down your findings. If you thought of several interfaces, bring your raw pencil sketches for discussion. If you implemented some UI features, run a demo: keep an always-running version of the demo to get realistic feedback from your advisor. If you built much of the backend but the UI is buggy and nothing works yet, make sure you have an exact timeline of when things will be ready for feedback. If you implemented a novel algorithm, report initial testing results or schedule time for a code review. If you read some papers, explain them. Be honest about what you understood and what you didn't. Do not provide abstract rehashes of papers; instead, go through a mini presentation of each paper's key contributions and findings.
5) Address issues discussed at previous meetings: Do not leave your advisors hanging. While we often appear to have serious memory problems, we do remember your projects, and we keep a mental visualization of where you currently are with respect to the big picture. Regularly ignoring advisor suggestions (such as experiments to run, algorithms to try, etc.) without justification will hamper your research and damage rapport.
6) Make sure all meeting artifacts are easily accessible: Consider storing all these artifacts in a shared Dropbox folder. You would be amazed how much these meeting notes and preparation materials can help when you write a paper later.
7) Schedule meetings at the right time: I'm a morning person. By late afternoon, I'm about 60% functional. If you want my full attention, you need to schedule a morning slot. To get the most out of your advisor, aside from making them look forward to their meetings with you, make sure you meet at the time that is best for both of you. If you are slow early in the morning, go for an 11:00 am meeting instead. If you are prone to post-lunch comas, don't meet right after lunch. Give yourself at least an hour of prep before the meeting to ensure that you know exactly what you want to show and tell and what you need help with. Set up a calendar invite with a fixed meeting time on a per-semester basis, and agree on this fixed time at the start of every semester. Setting up these things, even if you think they are minor conveniences, goes a long way in showing that you care about your research and respect the time of those you work with. They, in turn, will respect your time and effort and bring their very best to these meetings.
8) Email the agenda before the meeting and attach the extras: Even if you share a Dropbox folder with the materials, it doesn't hurt to send a reminder email first thing in the day with the agenda and additional materials (slide decks, reports, etc.) to mentally prepare your advisor for your much-anticipated visit.
Parallel databases or MapReduce? Your technology of choice for data processing on clusters will depend on performance. The technology that delivers a response to your information quests first wins. We could argue about the relative importance of performance vs. ease of use, but at the end of the day, faster is stronger. This is the zeitgeist of the 21st century. As Daft Punk sings: “Work it harder, make it better; Do it faster, makes us stronger; More than ever, hour after; Our work is never over.” So, putting this debate to rest by quoting lyrics from a pop album, let's settle which technology is faster. Read the rest of this entry »
Who knew that coming up with a statistical model that fits your problem requires a bit of daydreaming, coffee, and people-watching? I got my inspiration for modeling MapReduce behavior in the face of failures from my grocer, Raj. I was trying to figure out how much free time Raj gets between customers, and whether that idle time is enough to, say, read a few pages of a book. Read the rest of this entry »
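As a taste of where that line of thought leads: if one assumes customers arrive as a Poisson process (my simplifying assumption here, not a claim from the full post), the idle gaps between arrivals are exponentially distributed, and the chance of a long lull falls out of one formula. A sketch:

```python
import math

def p_idle_gap_exceeds(rate_per_min, minutes):
    """With Poisson arrivals at `rate_per_min`, inter-arrival (idle) gaps
    are exponential: P(gap > t) = exp(-rate * t)."""
    return math.exp(-rate_per_min * minutes)

# Say Raj sees about 6 customers an hour (0.1 per minute).
# Chance any given lull lasts 10+ minutes -- enough to read a few pages:
p = p_idle_gap_exceeds(0.1, 10)  # exp(-1), roughly 0.37
```

So under these made-up numbers, better than a third of Raj's lulls would be long enough for a few pages, which is the kind of back-of-the-envelope the grocer analogy invites.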
This post follows an earlier post motivating a statistical comparison of the performance of MapReduce and parallel databases in the face of failures.
I rarely paid attention in 10th grade Chem, but radioactivity was too cool to sleep through, and so I still remember this: a radioactive Carbon-14 nucleus is unstable and unpredictable. Eventually, it disintegrates (into a more stable nucleus), but always with a bang (it emits radiation). We can't predict when an atom in a lump of Carbon-14 will decay, but we can predict the collective decay rate of that lump: in about 5,700 years, half of the Carbon-14 atoms in the lump will disintegrate. Ten years later, I see the relevance of high-school Chem to cluster computing: a cluster of machines is not too different from a lump of Carbon-14. System admins can't really predict which machine will fail in the next second. With experience, they can say how many machines might fail in a day; they can estimate the cluster's decay rate. Read the rest of this entry »
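The analogy maps directly onto a memoryless failure model. A minimal sketch, assuming exponential machine lifetimes; the cluster size and one-year mean time to failure below are invented for illustration:

```python
import math

def decay_rate(half_life):
    """Decay constant from half-life: lambda = ln(2) / t_half."""
    return math.log(2) / half_life

def expected_failures(n_machines, mttf_hours, window_hours):
    """If each machine's time-to-failure is exponential with mean MTTF,
    the expected number of failures in a window is n * (1 - exp(-t/MTTF))."""
    return n_machines * (1 - math.exp(-window_hours / mttf_hours))

lam = decay_rate(5730)  # Carbon-14: half the lump gone in ~5,730 years

# A hypothetical 1,000-machine cluster where a machine fails about once a year:
daily = expected_failures(1000, 365 * 24, 24)  # roughly 2.7 machines a day
```

The same exponential law drives both computations; only the half-life (or MTTF) and the population change between the lump and the cluster.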
With petabytes of data to process, we are limited to using clusters of shared-nothing parallel machines. No single machine has the memory or processing capacity to handle such amounts of data. So we divide and conquer: we divide the processing work and data across many machines. The more data we have, the more machines we throw in: we scale horizontally instead of vertically (i.e., we add more commodity machines instead of using a more powerful machine with more memory, more CPU power, more disk, etc.). Database systems have done this since 1990, when the first horizontally-scalable parallel database, Gamma, was created. Many commercial systems followed. Database systems, however, never scaled past 100 machines. They didn't need to … Read the rest of this entry »
Mathematics is a language. We use it to describe and quantify things. Our first exposure to the language is when we learn to describe counts of things: one apple, two cats, three dogs, etc. Later in life, we use Mathematics innocuously: when we order a pizza, we order a certain diameter – 16″. Our subconscious mathematician visualizes the area of the pizza as π*(16″/2)². It then splits the pizza eight ways and figures out that we probably need another large pizza to feed the guests on game night. In our day-to-day lives, this deductive language is never spoken except when it renders a result (we need another pizza) or succinctly describes an event (a 16″ pizza). We are out of practice when it comes to communicating with each other in our innate mathematical language. And so, like a student who learns French grammar for a year and is then dropped in Paris, a student with less than a year of college calculus finds himself incapable of communicating more than his name, awkwardly revealing that English is his first (and only) language, when dropped into a graduate Maths course. Read the rest of this entry »
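For fun, the pizza arithmetic from the paragraph spelled out as a toy computation (the eight-way split is from the text; everything else is just the area formula):

```python
import math

def pizza_area(diameter_inches):
    """The subconscious computation: area = pi * (d/2)^2."""
    return math.pi * (diameter_inches / 2) ** 2

area = pizza_area(16)   # about 201 square inches
per_slice = area / 8    # split eight ways: about 25 square inches per slice
```

Whether 25 square inches per guest is enough is, of course, the part of the deduction that Mathematics leaves to the host.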