
Announcing three open source projects for developing with TypeScript


Although Palantir is mostly known as a Java Swing shop, we have quietly (or not so quietly) been building for the web for a while now. From D3 for stunning graphics to Backbone and Angular for large-scale applications, we are using the latest and greatest web technologies to build the next generation of our products. As we’ve made the transition from native to web applications, we’ve tried to preserve some features of the strong development process that has served us so well in the native world: robust unit and end-to-end testing, continuous integration builds, and most importantly, a developer experience that makes it easy and fun to write the data analysis platforms and applications that our customers rely on to solve their most important, most complex data problems. We found TypeScript’s optional typing and forward-looking adoption of certain ECMAScript 6 features to be a great fit for those needs, but we wanted a bit more tooling. So today we’re pleased to announce three new open source projects we’ve undertaken to help fill out the TypeScript ecosystem.
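As a quick illustration of that appeal (the `Dataset` type and `summarize` function below are invented for this post, not taken from our codebase), optional type annotations and ES6-style arrow functions let the compiler catch whole classes of errors before the code ever runs:

```typescript
// Hypothetical example: the types here are invented for illustration only.
interface Dataset {
    name: string;
    rowCount: number;
}

// The annotations are optional, but once present the compiler checks every call site.
function summarize(datasets: Dataset[]): string[] {
    // Arrow functions are one of the forward-looking ES6 features TypeScript adopted early.
    return datasets.map((d) => d.name + ": " + d.rowCount + " rows");
}

summarize([{ name: "flights", rowCount: 1000000 }]); // OK
// summarize([{ name: "flights" }]); // compile error: `rowCount` is missing
```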

Eclipse TypeScript plug-in

While we love Sublime Text at Palantir, we thought it would also be cool to have more of an IDE experience for TypeScript (probably because it is something we are familiar with from Java). Although there was already excellent support via WebStorm, we really wanted a first-class experience in Eclipse as well, since it’s our primary development environment for our server code. We knew this would be a pretty involved task given all the functionality offered by a modern IDE, but one of our summer interns, Tyler Adams, bravely took on the project. The TypeScript team has done a great job of exposing the basic services necessary to build a robust IDE experience: auto-completion, syntax highlighting, code compilation, etc. All we had to do was figure out how to hook these services into Eclipse via a plug-in. One of the fun aspects of this project was that since parts of the plug-in are written in TypeScript, we were really motivated to finish features that would make writing the plug-in itself easier. By the end of the summer we had gotten pretty far (check out the feature list) and were so happy with the plug-in that we converted all of our code over to TypeScript (more about this below). There is always room for improvement though, so we’d love to have your suggestions and code contributions. You can check out the project here: https://github.com/palantir/eclipse-typescript.

TypeScript auto-complete in Eclipse.

TSLint

Our backend code is mostly written in Java, and we love tools like FindBugs and Checkstyle for ensuring that coding best practices are followed and common pitfalls are avoided. One of our engineers in NYC, Ashwin Ramaswamy, wanted some of those same assurances for our TypeScript code. He looked into existing tools such as JSHint and ESLint but decided that we really needed something that would work directly on the TypeScript code. Each summer we hold Hack Week, a week during which all normal work stops and everyone works in teams on projects they would like to see come to fruition. This past summer, Ashwin chose to create TSLint as his project. He didn’t stop with the linter, though – there are also plug-ins for Eclipse and Grunt to make it easier to use. Check it out at: https://github.com/palantir/tslint.
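To give a flavor of what a TypeScript-aware linter can catch, here is a contrived snippet; the rule names in the comments come from TSLint’s documentation, though the exact rule set varies by version:

```typescript
// Contrived example of lint findings on otherwise-valid TypeScript.
function isEmpty(items: string[]): boolean {
    if (items.length == 0) {      // "triple-equals" flags the loose `==`
        return true               // "semicolon" flags the missing semicolon
    }
    var unused = items.length;    // "no-unused-variable" flags dead locals
    return false;
}
```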

CoffeeScript to TypeScript converter

We use CoffeeScript for most of our projects at Palantir, but we found that for some really large ones, optional typing can be a real boon to productivity. We learned this lesson the hard way after trying to maintain a medium-sized application and finding that we had to write an incredible number of unit tests to be certain that even the simplest of edits (like renaming a variable) were safe. This summer, we asked one of our interns, Jared Pochtar, to work on an automated method of converting our CoffeeScript code to TypeScript. He came up with the brilliant idea of slightly modifying the CoffeeScript compiler to output TypeScript instead of JavaScript. If you’d like to try converting your codebase to TypeScript as well, you can check out his work here: https://github.com/palantir/coffeescript-to-typescript.
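As a sketch of why typing makes such edits safer (the names here are invented for illustration), consider what happens when a field is renamed:

```typescript
// In untyped CoffeeScript/JavaScript, renaming a field only fails at runtime
// (or in a unit test); in TypeScript, every stale reference is a compile error.
interface RequestOptions {
    url: string;
    timeoutMs: number; // rename this to `timeoutMillis`...
}

function buildRequest(options: RequestOptions): string {
    // ...and the compiler immediately flags this stale reference,
    // no unit test required.
    return options.url + "?timeout=" + options.timeoutMs;
}
```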

With the recent release of TypeScript 0.9.5, we’ve begun to see the path towards 1.0 really take shape. We think TypeScript is going to be a great addition to the web community, and we are really looking forward to seeing where it goes next and what people do with it!


New Office for NCMEC, New Possibilities


We officially welcomed our long-time partner The National Center for Missing & Exploited Children (NCMEC) to Palo Alto last Thursday. Since 1984, NCMEC has worked tirelessly to protect children throughout the country from abduction and exploitation. It is a mission that we are extremely proud to support by donating our software and engineering expertise.

Though NCMEC maintains a regional southern California office, a local presence in Silicon Valley will further boost their efforts to work closely with the technology industry. A formidable technological response is required to stay ahead of child predators and track down missing and exploited children. Palantir has been assisting NCMEC in this regard since 2010, and their new proximity will make this partnership even more beneficial and productive.

NCMEC’s new Palo Alto office.

With Palantir, NCMEC analysts can make sense of large volumes of dispersed data and draw connections to help law enforcement fight the abduction and sexual exploitation of children. Sources such as internal NCMEC databases, case reports, public records, open source data, websites, social media, maps, images, and videos can all contribute clues that help bring children to safety. By integrating these sources in one place, analysts can search and investigate the data in minutes or seconds, freeing up time to do more work and provide greater assistance to law enforcement. Learn more about how NCMEC is using Palantir in the video below.

NCMEC’s new location will help yield more success stories, like the case of a 17-year-old girl who was reported missing and potentially involved in child sex trafficking. Through various searches, an NCMEC analyst was able to find multiple posts online that advertised this missing child for sex. Using information in the ads, the analyst was able to tie them to other posts from the same pimp. The analysis ultimately covered more than 50 advertisements, 9 different females, and a trail that covered 5 different states. A link analysis graph created in Palantir allowed law enforcement to easily see the full scope of the ring. This insight helped law enforcement link the pimp to a multitude of other crimes and other girls he had victimized.

With NCMEC now firmly planted in Palo Alto, we encourage other tech firms to support NCMEC’s life-saving work. We are incredibly proud and inspired to work with NCMEC, and welcome them to the Valley.

To learn more about our work with NCMEC, download the latest Impact Study.

How Many Years a Slave?



Each year, human traffickers reap an estimated $32 billion in profits from the enslavement of 21 million people worldwide. And yet, for most of us, modern slavery remains invisible. Its victims, many of them living in the shadows of our own communities, pass by unnoticed. Polaris Project, which has been working to end modern slavery for over a decade, recently released a report on trafficking trends in the U.S. that draws on five years of its data. The conclusion? Modern slavery is rampant in our communities.

January is National Slavery and Human Trafficking Prevention Month, and President Obama has called upon “businesses, national and community organizations, faith-based groups, families, and all Americans to recognize the vital role we can play in ending all forms of slavery.” The Polaris Project report, Human Trafficking Trends in the United States, reveals insights into how anti-trafficking organizations can fight back against this global tragedy.

For the past year our Philanthropy Engineering team has partnered with Polaris Project to provide them with our software and engineering expertise. In addition to serving victims and advocating for anti-trafficking public policy, Polaris Project operates the National Human Trafficking Resource Center Hotline (NHTRC). Victims and witnesses of human trafficking can call the hotline to report tips, request help, and connect with anti-trafficking services. Palantir has been instrumental in helping victims and callers quickly access the resources and help they need.

Polaris Project CEO Bradley Myles recently sat down with us to discuss how our software enables Polaris to do more than respond to individual calls to the hotline by discovering connections between cases and identifying global trafficking patterns and networks.

Footage of Polaris Project courtesy of William Caballero.

Polaris Project uses Palantir Gotham to leverage the data from nearly 100,000 calls. NHTRC may collect up to 170 different quantitative and qualitative variables per case record. These data originate from disparate sources—calls, emails, SMS, online tip reports, and publicly available information about trafficking. By integrating this data into a single platform, along with their national referral database of 3,000 contacts that includes anti-trafficking organizations, legal service providers, shelters, coalitions, task forces, law enforcement, and social service agencies, Polaris can locate emergency response resources and identify critical services for victims of trafficking in a matter of seconds.

In September 2013, we were honored to host Bradley Myles at Palantir Night Live, where he described the full scale and scope of human trafficking worldwide and shared his thoughts on how technology can help eradicate this crime once and for all.

We are proud to work with Polaris Project to help bring this issue back into the light, raise awareness, and combat the problem wherever it appears.

Going International with the Palantir Council of Advisors on Privacy and Civil Liberties


In 2012, our PCL team assembled the Palantir Council of Advisors on Privacy and Civil Liberties (PCAP), a body of experts in the privacy and civil liberties field who help us understand and address the complex privacy and civil liberties issues that arise in the course of building a sophisticated data analytics platform. The group continues to meet on a regular basis to discuss an ever-growing array of topics and provide invaluable advice to assist Palantir in enhancing the privacy and civil liberties protections built into our powerful analytic platforms.

In light of our growth in international markets and the general globalization of privacy, civil liberties, data protection, and human rights issues, the PCAP has decided to expand to add four new members who will bring more international expertise to the discussions. We are pleased to welcome the following new members to the PCAP:

  • Alex Deane – Currently the Head of Public Affairs at Weber Shandwick, Alex was also the founding Director of Big Brother Watch, a prominent U.K. civil liberties advocacy organization.
  • Sylvain Metille – Head of the Technology and Privacy practice at BCCC Attorneys-at-law LLC in Switzerland, where he specializes in data protection and surveillance issues.
  • Omer Tene – Vice President of Research and Education at the International Association of Privacy Professionals, Managing Director of Tene & Associates, and Deputy Dean of the College of Management School of Law, Rishon Le Zion, Israel, Omer is also a senior fellow at the Future of Privacy Forum.
  • Nico van Eijk – Professor of Media and Telecommunications Law and the Director of the Institute for Information Law at the University of Amsterdam, Nico has written extensively on a number of highly technical topics.

The PCAP is looking forward to interesting new perspectives and valuable contributions from our new members.

The Palantir Scholarship for Women in Engineering Finalists Are Here


Virtual labs where doctors and prosthetic designers can collaborate in the same “room.” Touch screen interfaces for cars. Robots that work with autistic children. Sequencing the genome of cancer cells.

These are just some of the projects that the 2013 Palantir Scholarship for Women in Engineering finalists are working on. These nine women were invited to visit Palantir’s Palo Alto headquarters on February 28 for a full day of demonstrations, interviews, panel discussions, and a tour. The event was meant to offer a taste of what a modern, collaborative working environment looks like, as well as a sampling of the kinds of issues involved in Palantir’s work. After a long day of conversation and discovery, the finalists and their Palantirian hosts were inspired by the variety of meaningful work being pursued by women in computer science and STEM fields.


The finalists in a session with Angela Muller

Palantir initially established the scholarship in 2012 to support underrepresented populations in computer science. This year the scholarship extended eligibility to women in all the STEM disciplines, from computer science to engineering, life sciences, and more. The goal behind this expansion was to provide opportunities not only to potential developers, but to women in all technical fields who might build or rely on technology like Palantir’s in their labs or elsewhere in their work.


Eugenia Gabrielova

With this broader perspective in mind, the scholarship essay prompt this year asked applicants to choose a data set and describe how they might analyze it to find insights into how best to promote opportunities for women in technology. This year’s winner, Eugenia Gabrielova, suggested using the GitHub Archive data to analyze patterns of women’s contributions to software projects and to build mentorship networks. A third-year Ph.D. student in Information and Computer Science at U.C. Irvine, Gabrielova is no stranger to the challenges scoped out in her proposal—she was the only woman in her graduating class at Northwestern to receive an engineering degree in Computer Science. She credits faculty members for treating her as an equal, not a special case. “They never pushed me to ‘represent ladies in C.S.,’” she said. “They met me where I was.”

Each of the finalists approached the scholarship with unique academic and professional interests, but they all shared a passion for supporting and nurturing women’s contributions in technology. Meena Boppana, an undergraduate at Harvard, started a math club for girls in low-income charter middle schools in response to the frustration she felt as the only girl on her own high school math team. Grace Gee, also at Harvard, is starting a national college initiative to help recognize and reward women involved in technology. She hopes her initiative will do for others what a similar program did for her as a high school student looking for ways to explore opportunities outside of her small hometown in southeast Texas.

The Palantir Scholarship for Women in Engineering is just one small part of a larger, growing movement to support women in technology. This movement stands to benefit not just individual women and their careers, but their professional and academic communities as well. “By excluding women, we’re missing a huge brain trust,” said Henny Admoni, a finalist and first-year Ph.D. at Yale University. Palantir is therefore pleased to congratulate all of its finalists on their achievement and commitment to their respective communities.

  • Eugenia Gabrielova - Ph.D. candidate, Information and Computer Science; University of California-Irvine
  • Meena Boppana - A.B. candidate, Computer Science; Harvard University
  • Grace Gee - S.M. candidate, Computational Science and Engineering; A.B. candidate, Computer Science; Harvard University
  • Annie Liu - Ph.D. candidate, Computer Science; Princeton University
  • Henny Admoni - Ph.D. candidate, Computer Science; Yale University
  • Edna Sanchez - B.A.S. candidate, Computer Engineering; University of British Columbia
  • Preeti Bhargava - Ph.D. candidate, Computer Science; University of Maryland
  • Tracy Ballinger - Ph.D. candidate, Bioinformatics; University of California-Santa Cruz
  • Kayo Teramoto - B.S. candidate, Electrical Engineering and Computer Science; Yale University


The 2013 Finalists. Back row, left to right: Kayo Teramoto, Annie Liu, Tracy Ballinger, Henny Admoni, Edna Sanchez; Front row, left to right: Grace Gee, Preeti Bhargava, Eugenia Gabrielova

Mapping the Syrian Crisis with the Carter Center



A heatmap of Syrian conflict events in the first half of 2014

The Syrian conflict represents one of the largest humanitarian crises in the world today. With millions displaced, billions of dollars of aid flowing to refugees, and a war raging, the safe and effective delivery of humanitarian aid to the vulnerable populations affected by the conflict has become one of the world’s hardest problems. To help with this problem, our partners at the Carter Center’s Syria Conflict Mapping Project have been using Palantir Gotham to analyze the open source data around the conflict and understand the opposition’s political and military structure. The result is a living archive of active militant groups, what they are doing and where they are operating, and how they relate to other armed groups.

The Syrian conflict is notable for the amount of related open source information being published through social media platforms such as YouTube and Facebook. Analysts at the Carter Center have carefully coded over 70 attributes from thousands of videos, which together represent thousands of individual conflict events and the formation of over 5,600 armed groups in which some 100,000 fighters have appeared. Analysts coded information including where and when events occurred, what weapons they observed, symbolism and uniforms that appear in the videos, and evidence of mass atrocities.

The ability to integrate and interact with this kind of data within a unified analytic environment can dramatically improve the way humanitarian organizations respond to conflicts. As the Carter Center’s Christopher McNaboe said in a recent piece about the Syrian Conflict Mapping Project in Forbes,

The platform we use—Palantir Gotham—is one of the best analytical tools in existence today, and has already helped us find meaningful trends in our data that would otherwise have been lost in a series of disconnected network diagrams, Excel spreadsheets, and reports.

By integrating and analyzing this information in Palantir Gotham (as in the video above), the Carter Center can better help humanitarian organizations deliver aid to those who need it most. In a rapidly evolving conflict where the security of aid workers and civilians is paramount, real-time data analysis provides international actors with situational awareness of the conflict as it unfolds, allowing them to operate both more efficiently and more safely. Furthermore, Palantir Gotham’s granular access controls and auditing capabilities allow for both raw data and analysis to be shared responsibly and securely, while ensuring that collaborators understand the sourcing and pedigree of the data in question.

If you want to learn more, some of the results of the Carter Center’s analysis can be seen in their latest public nationwide report. We are proud to be helping the Carter Center advance this critically important work.

AtlasDB: Transactions for Distributed Key-Value Stores (Part I)


AtlasDB is a massively scalable datastore and transactional layer that can be placed on top of any key-value store to give it ACID properties. This is the first of several blog posts that will introduce AtlasDB and describe how we built it at Palantir.

Building AtlasDB: the Inspiration

In 2010, Daniel Peng and Frank Dabek of Google published a paper entitled Large-scale Incremental Processing Using Distributed Transactions and Notifications. The paper describes a system in use at Google named Percolator, which sits on top of BigTable, Google’s distributed key-value store.

Google needed transactions with ACID properties. They also needed a highly fault-tolerant system to deal with the potential failure of key parts of their system under load at massive scales. This drove them to push the accounting data for the transactions into BigTable as well, as BigTable already handled both replication of its data and fault tolerance.

Unfortunately, due to the number of writes involved in the transaction accounting, they saw an approximately 75% performance hit when using transactions. Percolator was built to enable incremental updates to the search indexes for the Google search engine, which was previously a periodic batch process. In this case, the extra reliability afforded by using BigTable to track the transactions was the important factor; even though there was a significant performance hit (over using raw BigTable), the overall performance was high enough to meet its design criteria.

Meanwhile, at Palantir, we were hitting a similar obstacle. The interactive analytic core of Palantir Gotham, which was originally built with a traditional RDBMS as its backing store, was hitting the limits of economical and easy scaling. We needed to move to a distributed backing store for scalability, but we needed transactions to enable our higher-level Revisioning Database to work correctly.

Percolator presented interesting possibilities, but with a 75% performance hit, the latency would be too long for our users. We shoot for a maximum ten-second wait time when doing interactive analysis—anything longer is an unacceptable interruption to our users’ investigative flow. Studying the Percolator protocol, our engineers saw some places where design constraints could be relaxed to lower the latency of each operation.

And so, the idea for AtlasDB was born. Now it was just a matter of building it.

Designing AtlasDB

Understanding Data and Development at Palantir

We take a human-centered design approach to building software. Instead of asking what technological problem we want to attack in isolation, we ask, “What features and infrastructure would a human analyst need to do his or her work?” To answer this question, we use a holistic understanding of how low-level data integration, scalable data servers, API layers, and an entire suite of user interface tools, when properly integrated, create an efficient, frictionless user experience for non-technical subject matter experts working with large-scale, disparate data to solve real-world problems. It’s an over-arching user experience (UX) play that decomposes into a lot of hard technical problems—similar to building something as complex and seamless as an iPhone.

When components already exist that serve our needs, we are happy to use them—our products use several high-quality open source datastores, map-reduce frameworks, and search engines. But we build new things whenever we identify a capability gap. Some examples from the past:

  • The Revisioning Database, the heart of Palantir Gotham that enables Nexus Peering
  • Nexus Peering, a technology that allows a single Palantir Gotham instance to be distributed, or for multiple instances to securely and safely exchange data
  • The Maps application, which allows the viewing of geospatial imagery and also the visualization and analysis of objects with associated coordinates
  • Horizon, an in-memory database that drives querying over billions of objects in interactive time, used to back the Object Explorer application.

Scaling, Round 1: Federation and Palantir Phoenix

In 2005, when we first started building Palantir Gotham, there wasn’t really a viable alternative to the RDBMS. The Revisioning Database, the Palantir Gotham persistent datastore, was originally implemented as a special schema inside a SQL database. The SQL RDBMS performed well for our users up to about a terabyte of data. But as the size and scope of Palantir Gotham-based analytic workflows pushed the database to its limits, there were only two available options if we stuck with an RDBMS:

  • Get a larger, more powerful computer. This works, but the price of computer hardware and advanced database software needed to support that scale grows super-linearly (sometimes exponentially) with the size of the data, making this approach really expensive, really fast.
  • Move to multiple computers and a sharded database architecture. While this can work well for certain database schemas, our schema is not well-suited to this approach. Sharding can also add a lot of complexity to the application using it, leading to a more bug-prone and fragile code base.

We didn’t like either of these options, so we began considering non-RDBMS-based solutions. We started with a federated approach that let us address much larger scales of source data without scaling the core. We developed Palantir Phoenix, a petabyte-scale datastore that can run map-reduce and other batch-oriented processes to filter and classify data that needs to be reviewed by human analysts. By federating search and storage of massive-scale data out to Palantir Phoenix, and importing relevant results into Palantir Gotham on the fly, we could guarantee analysts would still have all the data they need at their fingertips without storing everything in the Palantir Gotham Revisioning Database.

For example, a cyber security workflow may include network flow data, proxy logs, malware scanning results, inventory data, and authentication and VPN logs. The vast majority of the data in these data sets is not of interest to cyber security analysts—it overwhelmingly represents legitimate traffic. But when something bad happens, such as malware being detected on a user’s laptop, analysts can pull the relevant and related narrow slices of data from Phoenix into Palantir Gotham to determine the extent and severity of the intrusion. Using our Dynamic Ontology, data is mapped into the Palantir object model and managed in three separate subsystems:

  • a search server for fast full-text and wildcard searching;
  • a Horizon server for top-down filtering and comparisons of large sets of objects;
  • the Revisioning Database for tracking changes to the data and allowing analysts to work independently while also sharing analytic insights with each other. (This is also where the metadata that enables Palantir’s privacy and security controls is stored.)

While the size of the data that ends up in Palantir Gotham can be much smaller than the total data size of an instance, it can still get pretty big. Moreover, the housekeeping Palantir Gotham does around the source of the data (e.g., the revisions and security information) requires us to store 2-5x more data than the initial imported source data alone.

Scaling, Round 2: NoSQL K/V stores

It soon became clear that we were going to need to replace the datastore for the Revisioning Database. An obvious place to look for economical scalability was a class of datastores dubbed ‘NoSQL’.

NoSQL datastores use collections of commodity-class machines working in concert, enabling engineers to build a distributed system capable of scaling up to large data volumes and high performance with a smooth price curve—add more machines, get more capacity. When we first built the Revisioning Database in 2005, NoSQL systems offered intriguing potential as an approach but were still plagued with performance, scale, and most importantly, data loss and corruption problems. In the intervening years, these early problems have largely been engineered away.

Today, these systems underlie much of the modern web and are developed and used by companies like Google, Facebook, and Amazon. Many of them use a key-value (K/V) model, wherein a short, unique identifier (the key) is used to access an individual storage cell (the value). The storage cell may hold a simple value or a larger, complex data structure.
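In code, the core contract of a key-value store is tiny. Here is a minimal sketch with invented method names; real stores add column and timestamp dimensions, range scans, and batching:

```typescript
// Minimal sketch of the key-value contract (names invented for illustration).
interface KeyValueStore {
    get(key: string): Promise<Uint8Array | undefined>;
    put(key: string, value: Uint8Array): Promise<void>;
}
```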

While NoSQL stores have great scaling properties, they don’t make strong guarantees about the consistency of the system. Since each node of the system runs independently and any given value could lie on any node, it’s impossible to know whether a read spanning more than one node is consistent, i.e., reflects a single consistent state. For many uses (e.g., updating fields on a social network profile page), this property (called eventual consistency) is not a problem.

Unfortunately, a system like Palantir Gotham stores not just individual values but sets of related values that need to be read and written consistently: for example, many index entries along with a primary value. Without consistent reads, any operation that uses values from multiple keys can never be guaranteed to be correct.

Fortunately, implementing transactions can solve this problem by providing four guarantees, referred to as ACID:

  • Atomicity - every distinct operation of a transaction succeeds or the state is rolled back as if the transaction never happened; there is no way to partially complete the update of multiple fields
  • Consistency - data is in a consistent state at the beginning and at the end of a transaction
  • Isolation - the work taking place inside a transaction is invisible to any other operation taking place, so two transactions can be run concurrently without interfering
  • Durability - once a transaction commits successfully, the data must have been written to non-volatile storage in such a way that it won’t be lost in the event of a crash or power failure
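To make these guarantees concrete, here is a hypothetical sketch of the index-plus-primary-value problem described above being solved by a transaction. The TypeScript interfaces are invented for illustration; AtlasDB’s actual API is written in Java and differs from this:

```typescript
// Hypothetical transactional API (invented for illustration). Both writes in
// addPerson commit together or not at all (atomicity), so a reader can never
// observe the index entry without the primary row.
interface Transaction {
    get(key: string): Promise<Uint8Array | undefined>;
    put(key: string, value: Uint8Array): void;
}

interface TransactionManager {
    runTransaction<T>(task: (tx: Transaction) => Promise<T>): Promise<T>;
}

async function addPerson(txm: TransactionManager, id: string, name: string): Promise<void> {
    await txm.runTransaction(async (tx) => {
        tx.put("person/" + id, new TextEncoder().encode(name)); // primary value
        tx.put("index/name/" + name + "/" + id, new Uint8Array(0)); // index entry
    });
}
```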

Aside from the formal guarantees provided by transactions, there is a very practical consideration: without transactions, programmers have to reason very carefully about consistency and write a lot of code to try to manage it. As a result, development proceeds slowly and the code is much more fragile in the face of future changes. Pushing the consistency (and by extension, correctness) logic down into a transaction layer is usually the right separation of concerns.

Setting the Stage for AtlasDB: A Transactional API for Key-Value Stores

AtlasDB Architecture

The design of AtlasDB departs from Percolator in a few key aspects. Moving key locking out of the main datastore and into a dedicated lock server lessened the write overhead, increasing performance. Further improvements were gained by allowing the transaction accounting table to live in a datastore separate from the main scalable datastore. This decoupling allows the transaction data to live in a system that trades away some scalability for higher write performance. Since the transaction accounting data is quite compact, this is a huge win for the performance of the overall system. (We’ll cover the protocol and architecture in depth in a later post.)

The NoSQL-transaction revolution still required a few more developments to make the burgeoning AtlasDB as engineer- and user-friendly as possible. Along with the core changes to the transaction protocol and system architecture, our team set about designing a system that could be used with almost any key-value store. Rather than being tied to a particular key-value store, we decided to build an API layer that exposes transaction primitives. The API layer (effectively just a code library), along with a few lightweight network services, created a system that could be applied to any key-value store. Writing the driver for a new key-value store thus became a one-day task comprising, at most, a few hundred lines of code.
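A minimal sketch of that layering, again with invented names, might look like the following; the point is that only the small driver interface has to be re-implemented per store, while the transaction layer is written once against it:

```typescript
// Invented sketch of the driver abstraction described above.
interface KeyValueStoreDriver {
    get(key: string): Promise<Uint8Array | undefined>;
    put(key: string, value: Uint8Array): Promise<void>;
    delete(key: string): Promise<void>;
}

// An in-memory driver: the "easy-to-set-up" end of the spectrum, e.g. for a
// single-laptop development instance. A cluster-scale store would implement
// the same interface, and the application code above it would not change.
class InMemoryDriver implements KeyValueStoreDriver {
    private data = new Map<string, Uint8Array>();

    async get(key: string): Promise<Uint8Array | undefined> {
        return this.data.get(key);
    }
    async put(key: string, value: Uint8Array): Promise<void> {
        this.data.set(key, value);
    }
    async delete(key: string): Promise<void> {
        this.data.delete(key);
    }
}
```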

This is a good idea for a few reasons:

  1. Deployment flexibility - An application built on top of AtlasDB always sees the same API, allowing the key-value store to be switched out for different levels of scale. Palantir Gotham needs to run at scales as small as a single laptop, both as a development environment for creating customizations and enhancements and to support disconnected, self-contained operation in environments without networking infrastructure. It also needs to operate at petabyte scale for large production systems. Key-value stores that run at massive scale are usually somewhat complex to set up and administer; the easy-to-set-up key-value stores don’t scale.
  2. Pace of change in NoSQL - Though much more advanced than it was a decade ago, the NoSQL world has not yet reached full maturity. The capabilities, scale, and performance of different offerings is still rapidly evolving. Keeping AtlasDB free from committing to a single key-value store is essential, given the uncertainty around what the best options will be even a year or two from now.
  3. Consistent API - The bulk of the team’s work in completing AtlasDB was not building the transactional layer itself, but porting our extensive, existing data fusion platform from SQL to a key-value model of data storage. By abstracting the API into the AtlasDB transactional layer, we avoid ever having to port the entire product to yet another API—switching datastores is as easy as writing a new K/V driver.

AtlasDB today

AtlasDB is now the default datastore inside of new production instances of Palantir Gotham and is already being re-purposed for other use cases that need both the scalability and consistency that it offers. Stay tuned for part two of the AtlasDB series, where we’ll do a deep dive into the system architecture and transaction protocol that make it go.

Analyzing the Present and Possible Futures of the U.S. Veterans Population with CNAS and Palantir Metropolis


The U.S. veteran population includes more than 21 million men and women who have served their country in uniform, ranging from veterans of the “greatest generation” who fought in World War II to veterans completing their first enlistment. Within this population are 6 million “Gulf War-era” veterans who were part of, or have served since, the first Gulf conflict in the early 1990s.

The available data on veterans is scattered, inconsistent, and held in differing formats. This is a major challenge for the government agencies and NGOs that seek to serve veterans, as they often have to plan how to dedicate financial and human resources across a wide geography and plan major projects and facilities years in advance. Veterans service organizations often struggle to connect with veterans because they lack data relevant to the geographic areas they serve. With the help of Palantir Metropolis, researchers from the DC-based think tank Center for a New American Security (CNAS) are analyzing the present veterans population and developing projections to understand its possible future paths.

CNAS is the only think tank of its kind with a research team dedicated to veterans and military personnel. To better understand the current veterans population and its changing needs, this team is using the Palantir Metropolis platform to fuse a multitude of publicly available datasets from various government agencies, including the U.S. Department of Veterans Affairs (VA), the Census Bureau, the Department of Defense (DOD), and the U.S. Department of Housing and Urban Development (HUD). These public datasets consist solely of high-level, aggregated data and can be accessed through the agencies’ websites.

At the CNAS Annual Conference in June, senior fellow Phil Carter presented a number of findings from this initiative and discussed their implications for veterans policy going forward. (Full video of presentation available here).


Data source: Department of Veterans Affairs, powered by Palantir Metropolis

For example, in 2015, Gulf War-era veterans will overtake Vietnam veterans as the largest segment of the veteran population; with this shift comes a new set of concerns for the Department of Veterans Affairs and other government agencies tasked with serving veterans. Instead of retirement and geriatric care, younger veterans seek employment and rehabilitative sports medicine. They often have young families, are more diverse racially and by gender, and are more likely to move to cities.

Additionally, the current veteran population is most heavily concentrated in the Sunbelt regions of the southern and southwestern United States, reflecting a primarily retirement-age population, as seen in the map below.


Data Source: US Census Bureau, powered by Palantir Metropolis

However, employment opportunities in urban areas, near military bases, or related to specific events like the American energy boom are likely to attract younger veterans and shift this population significantly in coming years.

This kind of analysis can also help illuminate the situation for veterans in more specific areas and enables better planning by the VA and veterans service organizations. In the wake of the recent problems with VA waiting lists at facilities in Phoenix, AZ, for example, CNAS looked at Arizona counties by veterans population and VA spending to better understand the situation.

The four maps show the Arizona veterans population by county in absolute numbers (top left) and per capita (top right), as well as the amount spent by the VA annually in these counties (GDX) in absolute terms (bottom left) and per capita (bottom right). These maps are interesting because in both cases the highest absolute number of veterans and the highest amount of money spent are, unsurprisingly, in Maricopa County, where Phoenix is located. However, a more complicated picture emerges when looking at per capita population and spending, where Maricopa County is no longer the highest. The wide disparity between per-veteran spending figures suggests a number of potential conclusions, including huge economies of scale for serving urban veterans, and concentrations of high-need veterans in rural areas where it may be most expensive for the VA to serve them. Further analysis of this kind is necessary to understand what’s really going on in Phoenix VA facilities and how to better allocate resources and plan for the future care of veterans in those facilities.

At the conclusion of the conference presentation, CNAS debuted the first Palantir Metropolis dashboards ever to be completely public-facing, with no log-in required. They invited fellow researchers, policy makers, and veterans to join their effort to use veterans data to conduct better analysis and make better policy decisions for the future of America’s veterans and the government agencies that serve them.

To learn more about this exciting initiative, check out the CNAS conference presentation and the CNAS Palantir Metropolis dashboards at www.veteransdata.org.


Following The Ivory


Mapping the Global Ivory Poaching Supply Chain with C4ADS

For the past year, the Washington, DC-based non-profit C4ADS has been using Palantir to investigate elephant poaching and ivory trafficking, an illicit trade that is at its highest level in 25 years. Their new report, Out of Africa: Mapping the Global Trade in Illicit Elephant Ivory, reveals how specific criminal groups in Asia and Africa sustain the ivory trade and identifies common routes and potential bottlenecks in international transport, such as major shipping ports that ivory is likely to pass through.

It’s not the first time C4ADS has made headline news. Last year, C4ADS gained recognition after its researchers used Palantir to investigate a network of commercial ships trafficking arms into Syria from Ukraine. Since then, C4ADS has continued to use Palantir Gotham as its analytical system to investigate illicit networks across the world. Its work on elephant poaching and ivory trafficking offers novel insights into how these activities are used to fund organized crime as well as terrorist and insurgent actors.

By integrating and analyzing information from disparate sources into Palantir, including data on ivory seizures, armed conflict, maritime shipping routes, and local socioeconomic factors, C4ADS was able to release their first report back in April. Ivory’s Curse: The Militarization & Professionalization of Poaching in Africa explored how elephant poaching is inextricably tied to regional violence, as the black market for ivory generates cash for insurgents, militaries, and transnational organized crime. C4ADS analysts used Palantir to identify protected areas in Africa that appear to be at particularly high risk for armed conflict due to ivory poaching, and profiled seven separate regional case studies.


Using armed conflict data weighted by number of casualties, C4ADS examines how conflict in the DRC is clustered around numerous elephant ranges.

Following this analysis, C4ADS used Palantir to collect detailed information on ivory seizures and build out a picture of the networks of individuals and commercial organizations connected to consolidating, shipping, selling, and purchasing ivory. They showed, for instance, how much of the illegal ivory being sold is laundered through the legal ivory market in China, which has the largest market for both legal and illegal ivory in the world. According to the report, Chinese traffickers are instrumental in nearly every stage of the supply chain.

By shining a light on how exactly ivory is transported from elephant ranges in Africa to points of sale in Southeast Asia, C4ADS hopes that they can help authorities identify vulnerabilities in the ivory supply chain and better focus their resources. They also hope that this work will help the international community see how ivory poaching, far from being simply a conservation issue, is directly tied to highly militarized armed groups and professional, transnational criminal syndicates.

The below infographic summarizes some of the insights from C4ADS’ latest report. For additional details on C4ADS’ work, be sure to see the Illicit Networks section of last year’s Philanthropy Engineering Annual Report.

Mapping the Global Ivory Poaching Supply Chain with C4ADS

How to Ace a Software Decomp Interview


Coding. Algorithms. System Design. These interviews, which assess core skills necessary for a successful career at Palantir, remain at the heart of our hiring process. But as our company and products have evolved, so has the range of skills we need in candidates. We’ve recently added a decomposition interview to our slate, and, as we’ve done in the past, we’re going to offer some tips for how to do well in this portion of your Palantir interview.

What is Decomp?

Short for ‘decomposition of problems,’ the decomp interview helps us get a sense of how well you’re able to break down a problem into its nitty-gritty components, before the actual nitty-gritty building begins.

The software engineering problems we encounter are often nebulous and complex, and require engineers who can break them down, develop a reasonable first-pass solution, and independently take steps to execute that solution. Decomp is a collaborative process – it might involve some coding or sketching out what the code may look like. But most often, it involves whiteboarding and brainstorming with another dev to talk through the potential feature set.

This reflects what working at Palantir is actually like. Engineers have a tremendous amount of freedom here; our devs aren’t given features described to full specifications simply to implement. Rather, our customers typically give us an idea of what they think they need, but don’t know how to explain fully or build themselves. Since our work is constantly evolving, decomp is essential for all of our devs to be able to strategically assess the work that lies ahead.

We need people we can trust to do the right thing without a lot of supervision—people who can own large projects and take them consistently in the right direction. Invariably, this means being able to communicate well with the people around you. Decomp boils down to being able to see and do the right thing. We want to see if you can take nebulous requirements and translate them into something actionable. The result might be a complex, well-scoped system, or just a basic solution that solves a particular need.

Interviews

Decomp questions tend to be about designing a system or feature. We are looking for you to break down the problem, identify potential roadblocks, and develop a solution. From there, you’ll be having the same kind of conversation with your interviewer as you might have with a colleague in a planning meeting. This means that you’ll be shaping the discussion and the problem space. Some tips:

  • Ask clarifying questions
  • Make a sketch so that you understand the system
  • Write a list of pros and cons for a given approach

In imitation of the collaborative environment at Palantir, your interviewer might suggest alternatives to your solution or point out problematic areas, looking for you to adjust or defend your course of action. Since there are many tradeoffs in any system we design, your interviewer will prod at your approach – don’t be afraid to defend elements that you feel are better than the alternative! But you should also be prepared to accept and incorporate a different implementation than what you had initially envisioned; flexibility is key in good decomp.

Try This At Home!

The best way to prepare for a decomp interview is to try it out yourself. There is really no substitute for practice here.

  • Pick an open source project that interests you and figure out how you would design it. Bonus points if you then look at the real-world implementation and see what tradeoffs the authors decided to make.
  • Grab a coworker or classmate and try to do a design session with them. They should be involved in the session as well – together, the two of you should come up with the best solution possible.
  • Pick a problem and think through all of the ways a solution might fail. What if there are hundreds of users? Millions of files? Tons of data? Building intuition around how to suss out scale constraints will help you approach decomp in a structured way.

Just In Case…

We hope this post has given you a good idea of what to expect and what to prepare for in your decomp interview. But just in case, here are a few pointers on what to avoid:

  • Indecisiveness: We know it’s tempting to spend a long time considering many different paths of problem solving, or to defer to someone else to bear the impact of making decisions. But being unable to decide on a plan of action, or being unable to support your decisions, is not particularly valuable, and can slow the entire project down.
  • Stubbornness: On the other hand, if you’re unwilling to be open to new design ideas, that can be a red flag. At Palantir, we believe the best idea wins. Decomp is often a healthy balance of supporting your own work while still listening to input from others.
  • Creating solutions in a vacuum: A perfect solution is very nice, but often impractical. If your solution doesn’t have a clear context, it’s not going to be useful.
  • Unconstrained problem solving: Your interviewer is going to present an extremely broad problem – way too broad to tackle in its entirety in a 30-minute interview. Focus on the aspects that are most interesting to you, and avoid getting lost in the weeds.

Good luck!

Analyzing 10 Years of Commitments with the Clinton Global Initiative


In September, the Clinton Global Initiative (CGI) celebrated its 10th Annual Meeting. Since 2005, more than 3,100 Commitments to Action have been made, improving the lives of over 430 million people. These commitments are dedicated to a range of key issues around the world, including Disaster Relief and Recovery, Global Health, Women/Girls, and Clean Energy.

To help CGI understand and evaluate this large volume of commitment work, Palantir donated our software to CGI and brought their last nine years of data into the platform along with 20 datasets on various global indicators (such as the World Bank’s Development Indicators). CGI analysts then used Palantir to examine commitments across time, by country, by number of partners, topic area, and a number of other categories. This analysis enabled CGI to better assess which commitments have been effective, which haven’t, and what areas members could be doing more to address. The CGI team then drew a number of important conclusions that will help shape the direction of commitments made in the future. For example, the team noticed the high success rate of commitments built on multi-sector partnerships, such as those between NGOs and corporations, as well as the growing importance of information technology as a component of commitments.

This video clip, which aired at the CGI Annual Meeting, briefly summarizes our work together and its importance to CGI in planning for the future:

To visit the Interactive Commitment Portfolio, read the full CGI report, and watch a video from the Annual Meeting in which Chelsea Clinton discusses the implications of this analysis, see this Clinton Foundation blog post.

C4ADS helps East African authorities and local NGOs seize over $10MM in illicit environmental goods


Throughout 2014, our partners at C4ADS have been using Palantir Gotham to uncover illicit networks involved in elephant poaching and the lucrative illegal ivory trade. C4ADS has integrated and analyzed open source data to shine a light on transnational networks trafficking in elephant ivory and other illegal environmental goods, while working with a coalition of NGOs, government partners, and private corporations. As an example of the direct impact of this work, C4ADS’ analysis recently helped East African authorities identify a likely illicit shipment, leading to the seizure of at least $11 million in illegal environmental goods, one of the largest environmental crime seizures in African history.

The video below was recorded by C4ADS to show how they use Palantir to piece together disparate data sets and build an understanding of the organizations involved in shipping ivory. The data they use includes open source maritime shipping data, African and Asian business directories, court records, and customs documentation. Once integrated into a unified platform, this data lets C4ADS find trends and uncover networks that might otherwise not be apparent to authorities. We are proud to support C4ADS’ impactful work, which builds on last year’s ‘Odessa Network’ investigation that led the Greek coastguard to seize 59 shipping containers of small arms en route from Ukraine to Syria. Stay tuned for more from C4ADS as their investigations progress; we hope to have more news to share soon.

Food for Thought: Improving Farmer Livelihoods on 3 Continents


With much of the US focusing on the dinner table at Thanksgiving, we at Palantir wanted to share a video about our work helping Grameen Foundation and other partner organizations use data to alleviate food insecurity and improve livelihoods for smallholder farmers around the world. Improving these farmers’ lives is one of the world’s most important problems: over two billion people make their living from agriculture, and smallholder farmers make up 70% of the world’s extreme poor. Watch the video below to see how our technology is helping our partners better target products and services to improve farmers’ productivity and sustainability, optimize farming practices to improve crop yields, and invest in services that work, through more effective program monitoring and evaluation. It is critical for organizations working in this space to use data effectively in order to support these outcomes, and we look forward to building on this body of work in 2015 to continue supporting our partners’ excellent work around the world.

Housing Homeless Veterans with Palantir Homelink


It is estimated that on any given night, 50,000 veterans are homeless in the United States, including over 12,000 Iraq and Afghanistan veterans. For homeless veterans across the U.S., the process of finding permanent housing can be difficult and slow. Homeless service providers struggle to meet the complicated federal and local requirements for funding and housing, and legacy databases are siloed and outdated. Veterans end up bouncing between various organizations, filling out dozens of forms, and sitting on housing waiting lists for up to 300 days.

Last summer, we partnered with Community Solutions to help homeless veterans find housing faster in cities across the U.S. We developed a pilot program with San Francisco through an iterative design process where our team shadowed homeless service providers at work in the city and learned about the housing match process from highly experienced non-profit and city government partners. We then tested a range of design and workflow concepts in consultation with these partners, resulting in our “Homelink” tool. Homelink is a secure, user-friendly platform for cities and non-profits to assess, track, and match veterans to housing within a single interface.

We continued to develop features and polish the design of Homelink as we kicked off our pilot with San Francisco in October, with users from San Francisco non-profit organizations, city government agencies, and the San Francisco Department of Veteran Affairs. Based on feedback from this pilot phase, including multiple on-site visits to these various homeless service providers, we have continued to develop Homelink and were able to launch a new expansion to the city of Nashville in December 2014 to help them end veteran and chronic homelessness.

In 2015, we plan to scale Homelink to 10 additional cities across the U.S., empowering them to match veterans and the chronically homeless throughout their communities to housing quickly and easily.

One of our partnering organizations in San Francisco is the non-profit Swords to Plowshares, which is using Homelink to speed up the matching process with their veteran clients in the San Francisco area. Check out the brief video below to hear Paul Howard from Community Solutions and Dave Lopez from Swords to Plowshares talk about the problem of veterans homelessness and how they see Palantir helping to get veterans matched to housing:

Women in Cybersecurity Conversation and Cocktails


Please join us for an evening of conversation. Women in Cybersecurity will connect women who are leaders, experts, and practitioners in the cybersecurity field to share their diverse experiences and knowledge. The all-star panelists will address the evolving nature of the cybersecurity threat, challenges and opportunities for women in this domain, and successes and lessons learned for future leaders. Following the panel, attendees will have the opportunity to informally connect over cocktails and hors d’oeuvres to continue the conversation.

Hosted By

Melody Hildebrandt // Cybersecurity Lead at Palantir Technologies
Elena Kvochko // Lead, Partnership for Cyber Resilience at World Economic Forum

Panelists

Judith Germano // NYU Center on Law and Security Fellow, Founder of GermanoLaw LLC
Moira Kilcoyne // Co-Head of Global Technology and Data at Morgan Stanley
Allison Wikoff // Intelligence Analyst/Security Researcher at Dell SecureWorks

Time

Tuesday, March 10, 2015
6:30 – 8:30PM

Location

Palantir Technologies
15 Little West 12th Street
New York, NY 10014

Questions

Please contact: womenincybersecurity@palantir.com


Scholarship for Women in Engineering


At Palantir, our mission is to find or develop solutions for the world’s hardest problems. Since 2010, our annual scholarship has helped us discover and reward students doing just that. Our scholarship is now targeted towards undergraduate women and sponsored by our Women in Engineering group. This year, from a record pool of applicants, we chose nine finalists who we believe represent the best of what women in technology have to offer.

Our 2014 finalists’ projects solved disparate problems and incorporated principles from dozens of areas of study. The projects, which included everything from using blackberry juice to power streetlamps to developing an eco-friendly clothing dye, are focused on improving the way people around the world live and work. Some focus on improving the efficiency of small communities, like an application to help local government officials prioritize the needs of their community. Others help improve response to international crises, like an SMS-based platform to track global health epidemics or the enhancement of robotic vehicles to improve disaster response.

2014–2015 Scholarship Finalists

  • Elana Stroud, Computer Science and Design, Johns Hopkins University
  • Priyanka Sekhar, Computer Science and Artificial Intelligence, Stanford University
  • Nina Lu, Computer Science and Economics, University of Pennsylvania
  • Sarah Cen, Mechanical and Aerospace Engineering, Princeton University
  • Cyndy Marie Ejanda, Computer Science and Mathematics, Virginia Polytechnic Institute and State University
  • Lillian Tsai, Computer Science and Mathematics, Harvard University
  • Yiguang Zhang, Applied Mathematics, Statistics, and Computer Science, Johns Hopkins University
  • Alexandra Berges, Biomedical Engineering and Computer Science, Johns Hopkins University
  • Nicole Chernavsky, Bioengineering, University of California, Berkeley

After multiple rounds of application reviews and interviews, we chose this year’s winner: Johns Hopkins sophomore Elana Stroud. Elana is majoring in Computer Science and Design and built an application that allows local officials to crowdsource information about their constituents’ priorities so they can better allocate scarce city resources. Elana chose the project because she was motivated to help her adopted city of Baltimore govern more effectively.

On-Site Visit

We invited these women to join us at our Palo Alto headquarters for a day of interviews, a tour and demonstration with Palantirians, and a few tastes of our notoriously delicious food. In the morning, finalists had interviews with Palantirians across the business focused on their academic backgrounds and scholarship projects. Once the hard part was over, finalists chatted with us on topics including preparing for life after college; how to make the most out of job interviews; and when to speak up, raise awareness, get involved, and improve the atmosphere for women in academia and industry.

The finalists had breakfast with members of our engineering team in Hobbit House, our company-wide dining area.

The day’s tour included a walk through our office space, gardens, and the Design Studio, where they saw mockups of our front-end user interfaces.

The finalists were treated to Chef’s Table, a gourmet tasting menu prepared by our Chef de Cuisine.

Angela Muller, one of our Forward Deployed Engineers, talked to the finalists about her experiences as a woman in the tech industry.

The finalists had a “Fireside Chat” with a variety of Palantirians, who provided their perspectives on school and what happens after graduation.

It is always our hope that hosting the finalists provides two-way value. We aim to show the finalists that their projects are just the beginning of their impact: women in technology are continuously working to make the world a better place. The finalists’ passion for the problems they are solving and their novel insights inspire us to expand the way we solve the world’s most intractable problems.

Palantir welcomes she++ College Ambassadors

At Palantir, we make supporting the next generation of technologists a priority. We believe an important part of that mission is supporting a diverse group of people interested in technology, which is why we partner with organizations like she++.

This year, she++ launched the College Ambassadors Program to extend the organization’s reach to women across the country. Individual ambassadors are selected for their passion for and commitment to creating a welcoming and inclusive environment in the tech community. In total, 40 ambassadors were selected for a trip to the Bay Area, where they met with local tech companies. The College Ambassadors stopped by our headquarters in Palo Alto last week and attended the she++ gala on Friday night.

During the visit, we demonstrated our philanthropy work to give them a taste of how we use technology to help solve some of the world’s hardest problems.

After the demonstration, students met with Palantirians from our Women in Engineering group to discuss new applications for Palantir software. One idea that came up was how our software could help bring nutritious food options to low-income populations across the United States.

That evening, Palantirians attended the she++ Gala at the Computer History Museum to support all the work she++ is doing and the projects that the Ambassadors have taken on to improve technology education in their local communities across the nation.

Palantir is proud to support the next generation of #goodgirlsgonegeek!

Operation Double Trouble: Supporting Recovery Efforts in Wimberley

A series of flash floods hit Wimberley, Texas over Memorial Day weekend, sweeping away more than 1,000 homes and killing a dozen people. Within hours, Team Rubicon, a non-profit organization founded and run by military veterans, mobilized an all-volunteer force to help Wimberley recover from the devastation. We deployed a team of six Palantirians to work alongside Team Rubicon in “Operation Double Trouble,” including four interns who share their experiences below.

Stuart Guertin // Carnegie Mellon University, Class of 2016

I was assigned to Strike Team Alpha, which was sent to a one-story home on the Blanco River that needed a full demolition. The river had swept away the porch, leaving its roof dangling in the air. The weight of the porch had twisted the frame of the house beyond repair.

A neighbor agreed to lend his backhoe to our team so we could knock down the house and break it into manageable pieces. With the neighbor’s help, what would have been hours and hours of hard labor turned into a few swings of the bucket.

We then began the laborious process of loading the piles of rubble into wheelbarrows and pickup trucks and hauling them from the foundation to the road. With every board that went from the rafters to the road, the home turned into just a memory. But this sadness was offset in part by the opportunity that Team Rubicon provided: the homeowner now has a clean foundation on which to start rebuilding.

Lucy Cheng // Harvard University, Class of 2016

When I think of Team Rubicon, the word that comes to mind is “family.” In the field, I was struck by how much each person cared about their work, which is very similar to what I’ve seen this summer at Palantir.

Rather than simply trying to get the jobs done as quickly as possible, Team Rubicon did whatever they could to improve lives, from trying to salvage a workbench and stone walls to retrieving copper rods with resale value. At one home, we finished a demolition in one day that would have taken the homeowner weeks of hard work.

Working with members who had been with Team Rubicon for a while, I also came to appreciate the game-changing impact of Palantir. As an engineer, I often take technology for granted, but their stories of pre-Palantir operations really illustrated its importance.

One of the reasons I was drawn to Palantir was that it solves real problems, and this experience definitely embodies that idea. I am so glad I was given the opportunity to witness Palantir at work, and I couldn’t have asked for a better mission.

Stuart Wheaton // University of Michigan, Class of 2015

“We build software to solve the hardest problems.” I had heard the mission statement, but I didn’t truly understand it until I arrived in Wimberley to work on the front lines of the flood disaster relief. Team Rubicon moves fast and gets results.

I’m grateful for the opportunity to briefly deploy with Operation Double Trouble and see the impact of our software firsthand. I got to help out fellow humans in their time of need, meet some fantastic people along the way, and experience the culmination of the great efforts of many smart people to produce some seriously epic outcomes.

Jennifer Long // University of Notre Dame, Class of 2016

When we arrived, roughly 60 Team Rubicon volunteers were split into two main teams serving two different towns. I was placed on an assessment team for most of the day, which meant that we drove around to different parts of the town to log damage and place work orders.

We logged all of our work using Palantir Mobile, a smartphone application configured to bring our software into the field. It was much more hectic than I expected: many records were duplicated or inconsistent, some residents were away from their homes, we got calls from the Forward Operating Base and from residents asking us to check other houses, and we even saw a few cows on the side of the road.

The next day, we got a live demonstration of how Team Rubicon uses Palantir to track overall progress and plan where to send teams. Since I had spent much of Saturday submitting reports, it was really cool to see how the data is used downstream. For Team Rubicon, Palantir means that they’re on the cutting edge of disaster assessments.

This trip was an absolutely amazing opportunity to see the real effect of the things we build and learn why we spend so much time coding, designing, and testing. Team Rubicon is truly a first-class organization, and I’m honored to have been a part of it.

Introducing the CPG Consortium

We’re partnering with J.P. Bilbrey, CEO of The Hershey Company, to build a consortium that redefines how the consumer packaged goods (CPG) industry uses data. This alliance of CPG manufacturers and retailers will share pre-competitive information through Palantir to get in front of emerging trends that impact the industry as a whole. By putting cutting-edge technology in the hands of visionary leaders, we’re helping our CPG partners evolve their business and serve consumers better in a rapidly changing industry landscape. Watch the video with CEOs J.P. Bilbrey and Alex Karp to learn more.

Palantir Hack Week 2015

Last week, we held our 7th annual Hack Week. Each year, when Hack Week rolls around, we pause all normal work and form ad-hoc teams to create prototypes of new features, or even whole new products.

And each year, a lively competition ensues. While innovation is a normal part of the work we do every day, Hack Week is intended to give everyone a blank canvas. During Hack Week, Palantirians build things that take us in unexpected and unplanned directions. It’s a week of unbounded creativity, interrupted only by basic needs like food and sleep—and preparing a final presentation before time runs out.

It wouldn’t be Hack Week without the t-shirts to prove it.

GETTING BIGGER ALL THE TIME

Last year, Hack Week expanded to include our offices around the world. This year, we had a bigger and more global field than ever before. In total, over 100 Hack Week projects were submitted to the judges, and over 300 people around the world worked on Hack Week projects. And, because Hack Week fell in the summer, many interns joined teams to help build the innovative projects as well.

Hackers watching presentations—each team had 3 minutes to present their project to the judges.

THE PROJECTS + THE WINNERS

With over 100 submissions, our judges had no shortage of impressive projects to choose from, but choose they had to. This year’s panel of judges included 3 directors and, for the first time ever, a co-founder. Our judges chose 6 Honorable Mentions in 6 different categories, 3 Runners-Up, and 3 Overall Winners.

First, a sampling of the Honorable Mention and Runner-Up Hack Week projects:

  • Team Graphito created a version of the Gotham Graph for mobile, and their video really blew the judges away.
  • Team Shale built a product that adds a git-managed back end to Slate, includes unit tests for query validation, and provides automated promotion from staging to production.
  • Team Storytellers came up with a way to share Palantir with the world by making it easy to publish any Slate document to the web.

And now…for the Overall Winners:

  • Team Million Object Graph blurred the line between two core Palantir functions (Graph and Object Explorer) to allow users to work with “millions” of objects.
  • Team Contropolis devised a back-end-agnostic time series analysis application that works with time series data from anywhere in the Palantir ecosystem—making technical analytical workspaces simpler for users.
  • Team Super Smash Browser architected a workspace application that embeds a modern web browser and provides rich interaction between the Palantir workspace and any website.

Shyam Sankar (one of this year’s 4 judges) announcing winners and showing off prize medals for Hack Week 2015.

Congratulations to the winners! Hack Week is a beloved tradition at Palantir, and Hack Week 2015 was full of big ideas, impressive execution, and non-stop fun. Can’t wait until next year!
