
IBM Research - Haifa Seminars

The seminars at IBM Research - Haifa bring lecturers from academia and the research community to our lab. The topics focus on applied computer science issues, in general, and on issues related to work carried out at IBM Research. All seminars listed on this site are open to the public.

IBM Research - Haifa is located on the Haifa University campus, Mt. Carmel.

Upcoming 2013 Lectures


Tue, 7/5/2013
11:00 AM - 12:00 PM
SAT solving 101: Theory and Practice,
Gadi Aleksandrowicz, IBM Research - Haifa

Abstract: "Theory is where you know everything but nothing works.
Practice is where everything works but nobody knows why.
The lab is where theory and practice are combined: nothing works and nobody knows why".

SAT is the most famous NP-complete problem, and as such we know it very well in theory but have no idea how to solve it. In practice, however, SAT is solved every day, although nobody is really sure why. In this lecture I will present the basic theory of SAT solving: what SAT is, why it is so important, what basic solution methods are used, and how everyone can benefit from having a SAT solver at hand. No previous knowledge is assumed.
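As a small taste of the "basic solution methods" the abstract mentions, here is a minimal sketch (ours, not the speaker's code) of the classic DPLL procedure that underlies modern SAT solvers. Literals follow the DIMACS convention: 3 means variable 3 is true, -3 means it is false.

```python
# Minimal DPLL-style SAT solver sketch. A formula is a list of clauses;
# each clause is a list of non-zero integers (DIMACS-style literals).
def dpll(clauses, assignment=()):
    clauses = [list(c) for c in clauses]
    assignment = list(assignment)
    # Unit propagation: repeatedly satisfy unit clauses.
    while True:
        unit = next((c[0] for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        assignment.append(unit)
        new = []
        for c in clauses:
            if unit in c:
                continue                 # clause already satisfied
            reduced = [l for l in c if l != -unit]
            if not reduced:
                return None              # empty clause -> conflict
            new.append(reduced)
        clauses = new
    if not clauses:
        return assignment                # all clauses satisfied
    # Branch on the first literal of the first remaining clause.
    lit = clauses[0][0]
    for choice in (lit, -lit):
        result = dpll(clauses + [[choice]], assignment)
        if result is not None:
            return result
    return None
```

Calling `dpll([[1, 2], [-1, 2], [-2, 1]])` returns a satisfying assignment, while `dpll([[1], [-1]])` returns `None` (unsatisfiable). Real solvers add conflict-driven clause learning, heuristics, and restarts on top of this skeleton.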

Gadi Aleksandrowicz is a member of HRL's SAT team. He received his PhD in Computer Science from the Technion under the supervision of Prof. Gill Barequet. He has won many teaching awards, including the Technion's Continuously Excellent TA Award, and is the author of the Hebrew mathematics blog "Not precise" (http://www.gadial.net).

Tue, 20/5/2013
11:00 AM - 12:00 PM
Creating Competitive Advantage with a Positioning Idea,
Yochi Slonim, Managing Partner, FFWD.me

Abstract: An inspiring presentation about the power of a positioning idea and how you can use it to differentiate yourself, your venture, research or product.

Over the last decade, the key problem for companies as well as individuals has changed in a dramatic way. It's no longer what they know about their "customers" that drives success. It is what their customers know about them that makes all the difference. In a single Google search, a prospective customer, or for that matter, anyone considering your idea or product will find 10 other things that look almost the same as what you are presenting. What can you do to give yourself a competitive advantage?

If they can't see why you are different in a way that matters to them, customers will not buy your product. If they can't understand your differentiation, investors will not invest in your venture. And if your difference doesn't pop out right away as a powerful idea, decision makers will not gamble on your initiative. Without a powerful positioning idea, you are facing an ongoing, uphill battle.

This talk will make you completely rethink what the word "positioning" really means and how to come up with a positioning idea that will give you a unique competitive advantage.

Yochi Slonim, a serial entrepreneur, is the founder and managing partner of FFWD.me, the startup fast-forward program (www.ffwd.me). Since 2007, over 20 startup companies at all stages have used the program in diverse areas such as enterprise software, SaaS, mobile marketing, internet, telecom, and chip development.

From 2000-2006, Mr. Slonim was founder and CEO of Identify Software, pioneering black box flight recorders for software applications. The company grew to $50m in sales and was acquired by BMC for $150m.

Mr. Slonim has been recognized by Forbes as a great leader with unique ideas and different thinking:
http://www.forbes.com/sites/augustturak/2012/01/21/the-one-great-thing-that-every-great-leader-does/

From 1996-2000, he was Executive Vice President of Products and Marketing at Tecnomatix, a public NASDAQ company, which grew to sales of $100m and was later acquired by UGS for $220m. From 1989-1996, Mr. Slonim was a co-founder, CTO, and VP R&D of Mercury Interactive, which went public in 1993, grew to over $1B in sales, and was acquired by HP for $4.5B.

He holds a B.Sc. and M.Sc. in mathematics and computer science from the Hebrew University of Jerusalem.

Tue, 21/5/2013
11:00 AM - 12:00 PM
Enviromatics - Mathematical Programming Methods for Multi-dimensional Environmental Data,
Barak Fishbain, Environmental, Water and Agricultural Engineering Division, Faculty of Civil & Environmental Engineering, Technion - Israel Institute of Technology, Haifa, Israel

Abstract: As digital environments become increasingly complex, and the tools for managing information become increasingly advanced, it is essential to assist users in selecting their short-term and long-term attentional focus. To this end, many problems studied in the field of machine learning try to emulate the cognitive capabilities of a human. However, this anthropocentric and somewhat limited paradigm may no longer be the only source of inspiration. The variety and availability of sensors have made the accessible data much greater in quantity than the data that can be gathered and interpreted by a human being. In this talk, novel mathematical programming approaches for multi-dimensional data analysis are presented. These methods are highly robust and efficient, which allows for the analysis of very large data sets. They have been utilized in many fields: environmental monitoring, illicit nuclear material detection, fatal accident analysis, as well as image segmentation and video tracking. In this talk I will present the theoretical foundations of the method and will focus on two and a half applications: air quality analysis, radiation source identification, and fatal traffic accident analysis.

Barak Fishbain is an Assistant Professor in the Environmental, Water and Agricultural Engineering Division, Faculty of Civil & Environmental Engineering at the Technion - Israel Institute of Technology, Haifa, Israel. Prior to his arrival at the Technion, Dr. Fishbain served as an associate director at the Integrated Media Systems Center (IMSC), Viterbi School of Engineering, University of Southern California (USC), and did his post-doctoral studies in the Department of Industrial Engineering and Operations Research (IEOR) at the University of California, Berkeley.

Dr. Fishbain's research focuses on Enviromatics, a new research field which aims at devising mathematical programming methods for machine understanding of trends and behaviors of built and natural environments. This includes Environmental Distributed Sensing (i.e., distributed air and water quality monitoring), Safety and Traffic Data Realization, and Structural Sensory Networks.

Mon, 17/6/2013
11:00 AM - 12:00 PM
Industrial Strength Software Measurement,
David M. Weiss, Lanh and Oanh Nguyen Professor of Software Engineering, Iowa State University

Abstract: In an industrial environment where software development is a necessary part of product development, measuring the state of software development and the attributes of the software becomes a crucial issue. For a company to survive and to make progress against its competition, it must have answers to questions such as "What is my customers' perception of the quality of the software in my products?", "How long will it take me to complete a new product or a new release of an existing one?", "What are the major bottlenecks in software production?", and "How effective is a new technique or tool when introduced into the software development process?" The fate of the company, and of individuals within the company, may depend on accurate answers to these questions, so one must not only know how to obtain and analyze data to answer them, but also estimate how good one's answers are.

In a large scale industrial software development environment, software measurement must be meaningful, automatable, nonintrusive, and feasible. Sources of data are diffuse, nonuniform, and nonstandard. The data itself are difficult to collect and interpret, and hard to compare across projects and organizations. Nonetheless, other industries perform such measurements as a matter of course, and software development organizations should as well. In this talk I will discuss the challenges of deciding what questions to ask, how to answer them, and what the impact of answering them is. I will illustrate with examples drawn from real projects, focusing on change data and how to use it to answer some of the questions posed in the preceding.

David M. Weiss is the Lanh and Oanh Nguyen professor of software engineering at Iowa State University. Previously, he was the Director of the Software Technology Research Department at Avaya Laboratories, where he worked on improving the effectiveness of software development, particularly the effectiveness of Avaya's software development processes. To focus on the latter, he formed and led the Avaya Resource Center for Software Technology. Before joining Avaya Labs, he was the head of the Software Production Research Department at Lucent Technologies Bell Laboratories, and Director of the Reuse and Measurement Department of the Software Productivity Consortium (SPC). Before SPC, Dr. Weiss spent a year at the Office of Technology Assessment, where he was co-author of a technology assessment of the Strategic Defense Initiative. During the 1985-1986 academic year he was a visiting scholar at The Wang Institute, and for many years was a researcher at the Computer Science and Systems Branch of the Naval Research Laboratory (NRL), in Washington, D.C. He has also worked as a programmer and as a mathematician. Dr. Weiss is a senior member of the IEEE.
Dr. Weiss's principal research interests are in the area of software engineering, particularly in software development processes and methodologies, software design, and software measurement. His best known work is the goal-question-metric approach to software measurement, his work on the modular structure of software systems, and his work in software product-line engineering as a co-inventor of the Synthesis process and its successor, the FAST process. He is co-author and co-editor of two books: Software Product Line Engineering and Software Fundamentals: Collected Papers of David L. Parnas. Papers on which he has been co-author have three times won retrospective awards, twice from the IEEE and once from the ACM.
Dr. Weiss received the B.S. degree in Mathematics in 1964 from Union College, and the M.S. in Computer Science in 1974 and the Ph.D. in Computer Science in 1981 from the University of Maryland.

Previous 2013 Lectures


Tue, 23/4/2013
11:00 AM - 12:00 PM
Israel National Cyber project,
Prof. Isaac Ben-Israel, Tel Aviv University

Abstract: In November 2010, Prime Minister Benjamin Netanyahu asked the head of the National Council for Research and Development (NCRD), Prof. Isaac Ben-Israel, to assemble a team of experts to formulate a national strategy for the cyber domain in Israel.
In May 2011 the team submitted its recommendations; following their approval by the government, the National Cyber Bureau, among other bodies, was established.
In the lecture, Prof. Ben-Israel will describe the process and its outcomes.

Isaac Ben-Israel was born in 1949 in Tel Aviv, Israel.
He is a graduate of the Herzliya Hebrew Gymnasium (1967).
He studied mathematics, physics, and philosophy at Tel Aviv University (Ph.D., 1988).
Upon finishing high school he enlisted in the IDF (as an academic-reserve cadet) and served continuously until his retirement (June 2002).
During his service in the Air Force, Isaac Ben-Israel held positions in operations, intelligence, and development. Among other roles, he was head of the Air Force's Operations Research Branch, head of the Research Division of Air Force Intelligence, and head of R&D for the IDF and the Ministry of Defense (1990-1997). In January 1998 he was promoted to Major General as head of MAFAT (the Administration for the Development of Weapons and Technological Infrastructure) at the Ministry of Defense. During his service he twice received the Israel Defense Prize, was responsible, among other things, for developing the IDF's technological manpower ("Talpiot"), and was one of the initiators of the "Atidim" program.
Upon retiring from the IDF, Isaac Ben-Israel joined the faculty of Tel Aviv University as a professor.
In 2003 he founded Ray-Top (Technology Opportunities), which provides technological and strategic consulting to industry in Israel and abroad.
At Tel Aviv University, he headed the Curiel Institute for International Studies (2002-2004) and the Security Studies Program at the School of Government (2004-2007), and was a research fellow at the Jaffee Center for Strategic Studies (2002-2004). In 2002 he founded and headed the Yuval Ne'eman Workshop for Science, Technology and Security at Tel Aviv University.
In June 2007 Prof. Ben-Israel was elected to the Knesset on the "Kadima" list and served in the 17th Knesset until February 2009. He was a member of the Foreign Affairs and Defense Committee (including its Subcommittee on Intelligence Services), the Finance Committee, and the Science and Technology Committee, and chaired the Foreign Affairs and Defense Committee's Subcommittee on Home-Front Preparedness.
Isaac Ben-Israel has served on the boards of Israel Aerospace Industries (2000-2002) and the Israel Corporation (2004-2007), on the R&D committee of Teva's board (2003-2007), as a member of the advisory council of the Samuel Neaman Institute for advanced research in science and technology at the Technion (2000-2010), and as chairman of the board of the Technion incubator (2007).
In 2011 he was appointed by the Prime Minister to lead the team that formulated the national cyber policy of the State of Israel, in the course of which he established the National Cyber Bureau in the Prime Minister's Office.
Isaac Ben-Israel has written numerous articles on military and security affairs. His book Dialogues on Science and Intelligence (Ma'arachot, 1989) won the Yitzhak Sadeh Prize for military literature. His book The Philosophy of Intelligence was published in the Broadcast University series (1999) and translated into French (2004). A volume he edited, summarizing the first year of activity of the Tel Aviv Workshop for Science, Technology and Security, was published by the Ministry of Defense (From Man in Combat to Outer Space, 2007). His book Israel's Security Concept (2013) was published in the Broadcast University series by Modan.
Isaac Ben-Israel is married to Inbal (née Marcus) and is the father of three sons: Yuval (1981), Roy (1984), and Alon (1988).

Current positions:
- Chairman of the Israel Space Agency (since 2005).
- Chairman of the National Council for Research and Development, NCRD (since 2010).
- Member of the advisory council of the Israel Space Agency (since 2002).
- Full professor at Tel Aviv University, in the Security Studies Program and the Cohn Institute for the History and Philosophy of Science and Ideas (since 2002).
- Positions at Tel Aviv University: Deputy Head of the School of Government (since 2005); Head of the Yuval Ne'eman Workshop for Science, Technology and Security (since 2002); Head of the Security Studies Program (2004-2007, and since 2009); Chairman of the executive committee of the Interdisciplinary Center for Technological Analysis and Forecasting at Tel Aviv University (since 2011) and member of its scientific committee (since 2003).
- Head of the Strategic Forum of the Zionist Council (since 2009).
- Board member of the Fisher Institute for Air and Space Strategic Studies (since 2000).
- Member of the academic council of Afeka, the Tel Aviv Academic College of Engineering (since 2003).
- Member of the board of trustees of the Ariel University Center (since 2010).
- CEO of Ray-Top (providing consulting to defense industries in Israel and worldwide).

Positions and memberships abroad:
- Member of the International Academy of Astronautics (since 2012).
- Member of Singapore's National Research, Innovation and Enterprise Council (since June 2012).
- Board member of A*STAR (Agency for Science, Technology & Research), Singapore.

Awards:
- 1972: Israel Defense Prize, for developing the Phantom's bombing system.
- 1976: Air Force Prize, for developing a computerized command-and-control system.
- 1984: Head of Military Intelligence Prize for creative thinking.
- 1990: Yitzhak Sadeh Prize for military literature, for the book Dialogues on Science and Intelligence.
- 2001: Israel Defense Prize (second time), for a project embodying an innovative concept of the future battlefield.
- 2002: Singapore Defence Technology Distinguished Award, for his contribution to defense relations between the two countries.
- 2008: Exemplary Person award of Lions Israel, for his contribution to the security of the State of Israel.


Wed, 17/4/2013
11:00 AM - 12:00 PM
SpaceIL - the Israeli spacecraft to the moon,
Yariv Bash

Abstract: SpaceIL's goal is to make Israel the third country to land successfully on the moon (after the USA's Apollo program and an unmanned Soviet mission).
SpaceIL is aimed at inspiring the young generation in Israel and abroad by creating interest in space and science.
SpaceIL is registered as an Israeli non-profit, and is committed to donating all prize money to promote education and science.

Yariv Bash is the CEO and Co-Founder of SpaceIL, the Israeli team competing in the Google Lunar X-Prize - a privately funded, unmanned race to the moon, competing for the $30 million prize established by Google.
Yariv is an electronics and computer engineer. In his spare time, he organizes and participates in technological creativity events in Israel and around the world.

Tue, 9/4/2013
11:00 AM - 12:00 PM
On the perception of risk and return,
Yoav Ganzach, Tel Aviv University

Abstract: I examine the relationship between judgments of risk and judgments of expected return of financial assets. I suggest that for unfamiliar objects, both risk and return judgments are derived from global preference, whereas for familiar assets, these judgments are derived from the ecological (i.e., objective) values of the objects' risk and expected return. In addition, I examine the role of causal schemas and the role of risk attitudes in mediating the relationships between judgments of risk and return of familiar and unfamiliar objects. The data are derived from experiments in which highly trained financial analysts provide evaluations of stocks and other financial assets. Conceptual and practical questions concerning the nature, the meaning, and the assessment of risk and expected return are discussed.

Yoav Ganzach is the Lilly and Alejandro Saltiel Professor of Corporate Leadership and Social Responsibility at Tel Aviv University. He received his Ph.D. at Columbia University and taught at the Hebrew University and the City University of New York. His research lies in the areas of behavioral decision making, organizational behavior, and personality and individual differences, and he has authored numerous publications in these areas.

Tue, 2/4/2013
11:00 AM - 12:00 PM
Big Data Driven Methods for Cyber Security,
Amir Averbuch, School of Computer Science, Tel Aviv University

Abstract: Sophisticated malware such as viruses, worms, backdoors, Trojans, and spyware appears stealthily within an organization's data. The basic approach of the last 45 years to protecting and securing critical infrastructure and networking data against cyber attacks, called "walls and gates" (barriers between trusted and untrusted components, with policy-mediated pass-through), has failed. There is no reason to think it will be more successful in the future. The rule-based methodologies that govern firewalls and IDS/IPS are irrelevant today for detecting sophisticated malware that masquerades as regular streaming traffic and penetrates every commercial barrier on the market: such barriers rely on intrusion signatures that detect yesterday's attacks but fail to detect zero-day attacks.

We will show that cyber security is a problem that can be treated via Big Data analytics. We will show that the data dictates the generation of algorithms to detect malware. We will show that within the data avalanche there are opportunities for malware detection. We will show that the unification of several mathematical methodologies can produce algorithms for malware detection in big data.

We describe a methodology that automatically identifies anomalies. The core technology is based upon manifold learning, which identifies the geometry of big data, and upon training the system to extract heterogeneous features and detect patterns that deviate from normality through behavioral analysis of heterogeneous, complex, dynamic networking data. The system uses efficient computation based on multiscale computation, dictionary learning and kernel approximation, patch processing, adaptive subsampling, clustering, and profile updating. Promising preliminary results increase the potential of the proposed system to fill the gap that current state-of-the-art IDS/IPS and firewalls are unable to fill.

Joint work with A. Bermanis, G. David, M. Shalov, E. Shabat, G. Shabat, G. Wolf.
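As a rough illustration of kernel-based anomaly scoring of the general kind the abstract alludes to (this is our own toy sketch, not the described system), points that receive little Gaussian-kernel density mass from the training data can be flagged as anomalous:

```python
import numpy as np

# Toy Gaussian-kernel density anomaly score: a common building block of
# manifold-learning-based detectors. Test points far from the training
# data receive low kernel density, i.e., a high anomaly score.
def anomaly_scores(train, test, sigma=1.0):
    # Pairwise squared distances between every test and train point.
    d2 = ((test[:, None, :] - train[None, :, :]) ** 2).sum(axis=2)
    density = np.exp(-d2 / (2 * sigma**2)).mean(axis=1)
    return -density  # higher score = more anomalous
```

Production systems replace this single fixed-bandwidth kernel with multiscale kernels, learned dictionaries, and adaptive subsampling, as the abstract describes.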

Amir Averbuch is a professor of computer science, School of Computer Science, Tel Aviv University. Research interests: Applied and computational harmonic analysis, big data processing and analysis, wavelets, signal/image processing and scientific computing.

Wed, 20/3/2013
11:00 AM - 12:00 PM
Patent Challenges in the Biopharmaceutical Industry,
Dr. Nadav Ben Haim

Abstract: We will discuss the complex and evolving interface between the biopharmaceutical industry and the global patent system. The biopharmaceutical industry is a relatively new industry that has revolutionized the world of medicine over the last 30 years. However, the development of this industry has created new challenges and questions that need to be addressed in part by legislation and court decisions. We will review these challenges and their influence at the industry level as well as at the company and project levels.

Nadav is a patent attorney in the field of biologics. He currently works with Symango Ltd., a company he founded to provide IP consulting and management services to biomedical companies and entrepreneurs. Previously, Nadav worked as a patent analyst at TEVA Pharmaceutical Industries Ltd., where he was responsible for IP management of innovative and biosimilar global drug development projects. Nadav conducted his postdoctoral research in the nanomedicine research group at the University Hospital in Basel and obtained a PhD in molecular biology from the Swiss Cancer Research Institute in Lausanne.

Tue, 12/3/2013
11:30 AM - 12:30 PM
Global Warming: Why? How Much? and Who Cares?,
Dr. Eitan Israeli, IBM Research - Haifa

Abstract: Global warming is one of humanity's greatest challenges of the 21st century.
In the lecture I will address questions such as what causes it, what it holds in store for our children and grandchildren, and where uncertainty remains and/or influence is still possible. I will also discuss why we do not see the full extent of the danger, and what role scientists play in answering the challenge.
Note that the lecture does not represent the organization's position and is not related to Eitan's work at IBM.

Eitan Israeli holds a Ph.D. in operations research. He formerly headed the systems analysis department in the IDF Planning Directorate at the rank of colonel, and today manages the optimization group at the IBM Haifa Research Lab.

Tue, 5/3/2013
11:30 AM - 12:30 PM
Recent Progress in Maximization of Submodular Functions,
Prof. Seffi Naor, Computer Science Department, Technion

Abstract: The study of combinatorial problems with submodular objective functions has attracted much attention recently, and is motivated by the principle of economy of scale, prevalent in real-world applications. In particular, submodular functions are commonly used as utility functions in economics and algorithmic game theory. From a theoretical perspective, submodular functions and submodular maximization play a major role in combinatorial optimization; well-known examples of submodular functions in this setting include cuts in graphs and hypergraphs, rank functions of matroids, and covering functions. Several new results along this line of research will be discussed, in particular a new result on maximizing an unconstrained non-monotone submodular function.
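For readers unfamiliar with the setting, here is an illustrative sketch (ours, not necessarily the result discussed in the talk) of a deterministic "double greedy" pass for unconstrained non-monotone submodular maximization, run on a graph-cut function, one of the classic submodular examples the abstract mentions:

```python
# Cut function: a classic non-monotone submodular function.
def cut_value(edges, S):
    # Number of edges with exactly one endpoint in S.
    return sum(1 for u, v in edges if (u in S) != (v in S))

# Deterministic double greedy for unconstrained submodular maximization:
# X grows from the empty set, Y shrinks from the full ground set; each
# element is either added to X or removed from Y by comparing marginals.
def double_greedy(ground, f):
    X, Y = set(), set(ground)
    for e in ground:
        a = f(X | {e}) - f(X)      # marginal gain of adding e to X
        b = f(Y - {e}) - f(Y)      # marginal gain of removing e from Y
        if a >= b:
            X.add(e)
        else:
            Y.discard(e)
    return X                        # X == Y when the loop ends
```

On a path graph with edges [(0, 1), (1, 2), (2, 3)], this pass returns a set achieving the maximum cut value of 3. The deterministic variant guarantees a 1/3 fraction of the optimum; a randomized variant improves this to 1/2.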

Seffi Naor received his B.Sc. in computer science (cum laude) from the Technion, and his M.Sc. (cum laude) and Ph.D., both in computer science, from the Hebrew University of Jerusalem. He is currently a professor of computer science at the Technion - Israel Institute of Technology, Haifa, Israel, where he has been on the faculty since 1991. Prior to that he was a post-doctoral fellow at the University of Southern California and at Stanford University. During 1998-2000 Seffi Naor was a member of the technical staff at Bell Laboratories, Lucent Technologies, and during 2005-2007 he was a visiting researcher at Microsoft Research. He is a frequent visiting scientist at the IBM T. J. Watson Research Center and Microsoft Research. His research interests are mainly in the design and analysis of efficient algorithms, in particular approximation algorithms for NP-hard problems and on-line algorithms, algorithmic game theory, and complexity theory. Seffi Naor has published over 100 papers in top professional journals and conferences. He is currently on the editorial boards of Algorithmica and the Journal of Discrete Algorithms.

Tue, 26/02/2013
11:00 AM - 12:00 PM
Seeing the Invisible; Predicting the Unexpected,
Prof. Michal Irani, The Weizmann Institute of Science

Abstract: In this talk I will show how complex visual inference tasks can be performed with no prior examples, by exploiting redundancy within and across different parts of the visual data. Comparing and integrating local pieces of visual information gives rise to complex notions of visual similarity and to a general "Inference by Composition" approach. This makes it possible to infer the likelihood of new visual data never seen before, and to make inferences about complex static and dynamic visual information without any prior examples or prior training. I will demonstrate the power of this approach on several example problems (as time permits):

  1. Detecting complex objects and actions
  2. Prediction of missing visual information
  3. Inferring the "likelihood" of "never-before-seen" visual data
  4. Detecting the "irregular" and "unexpected"
  5. Super-resolution (from a single image)
  6. Segmentation of complex visual data
  7. Generating visual summaries (images and video)


Tue, 19/02/2013
11:00 AM - 12:00 PM
3D Printing and Digital Materials,
Dr. Ofer Shochet, Executive Vice President, Products, Stratasys

Abstract: Today we witness the enormous impact of Additive Manufacturing (AM) technologies on "how we fabricate" and "what we fabricate", the latter of which has been fundamentally affected by the freedom of form variation enabled by many AM techniques. Now we are witnessing the removal of another major constraint, as we add the freedom of material variation. This becomes possible in multi-material AM systems, where each element can have different physical properties. In inkjet-based 3D printing systems, liquid droplets of different materials are simultaneously deposited and solidified. The deposition sequence and resulting spatial distribution of the droplets are controlled by dedicated software. In effect, we turn digital voxels into physical blocks, and we call the resulting structure Digital Material (DM). Naturally, a pair of materials with different physical properties can yield numerous DMs, each having a unique set of properties inherited from its parent materials.

Ofer Shochet has led the Stratasys/Objet Products division since 2009 and is responsible for systems R&D, material development, and R&D product management. Prior to Objet, Ofer spent two years founding several companies in parallel in multiple high-tech fields; one of them (Navajo Systems) was recently acquired by Salesforce.com. This followed six years at Verint Systems, where he served as Senior Vice President and built the corporate technology group. Prior to that, Ofer was founder and Executive VP of Vigil Technologies. He was also a VP and R&D manager at Silicon Graphics, where he focused on 3D medical imaging, computer graphics, and high-performance computing. Ofer holds a PhD and an MSc (magna cum laude) in physics from Tel Aviv University.

Wed, 06/02/2013
11:00 AM - 12:00 PM
Free services - the challenge of the legal protection of privacy in the information economy,
Yoram HaCohen, Israel Law, Information and Technology Authority, Ministry of Justice

Abstract: "If the service is free, YOU are the product!" – are we aware of the potential democratic and social implications of this economic model?

In this lecture, Yoram Hacohen, Israel's retiring data protection commissioner (former head of the Israeli Law, Information and Technology Authority, ILITA), will survey the history of the international and national legal regimes of personal data protection, and will describe the current challenges of protecting privacy in the era of big data and the information-based economy.

We will start by describing the evolution of the international regime of privacy protection, and will try to understand what privacy is, what purpose it serves, and whether it is still needed. We will look at the new models of free services such as Google and Facebook, and question potential or existing market failures in this model. If time permits, we may talk about the technological, legal, and social mechanisms for meeting the challenge.

Yoram Hacohen was appointed in 2006 to establish and lead Israel's new data protection authority, the Israeli Law, Information and Technology Authority (ILITA). ILITA is the Israeli regulator for e-privacy and e-signatures.
As head of ILITA, Yoram represents it in the Israeli parliament (the Knesset) and in the media.
Yoram represents Israel at international fora including the International Conference of Data Protection Commissioners, the ICCP committee of the OECD, the IAPP, and the European Commission.
In the past, Yoram was a data security and electronic publishing entrepreneur. He established Israel's first PKI certification authority for digital signatures, and its first electronically published legal database.
Yoram is one of the founders of the Haifa Center for Law and Technology, and teaches courses at Israeli universities on "electronic evidence and computer crimes".
Yoram is married, a proud father of 3 kids, and loves music, cars, movies, and science - real and fiction.

Mon, 28/01/2013
03:00 PM - 04:00 PM
The Euclidean k-Supplier Problem,
Baruch M. Schieber, IBM T.J. Watson Research Center

Abstract: In the k-supplier problem, we are given a set of clients C and a set of facilities F located in a metric space, along with a bound k ≤ |F|. The goal is to open a subset of k facilities so as to minimize the maximum distance of a client to an open facility. We consider the k-supplier problem in Euclidean metrics and present for it a (1+√3) ≈ 2.73 approximation algorithm. This is an improvement over the 3-approximation algorithm of Hochbaum and Shmoys, which also holds for general metrics (where it is known to be tight). By a result of Feder and Greene, it is NP-hard to approximate the Euclidean k-supplier problem to better than a factor of √7 ≈ 2.65, even in two dimensions. Our algorithm is very simple and is based on a relation to the edge cover problem. We also present a nearly linear time algorithm for Euclidean k-supplier in constant dimensions that achieves an approximation ratio of ≈ 2.965. The previously known nearly linear time approximation algorithm in this setting, given by Feder and Greene, yields a 3-approximation.

This is joint work with Viswanath Nagarajan and Hadas Shachnai.
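To make the objective concrete, here is a brute-force evaluation of the k-supplier objective on a tiny planar instance (an illustration of the problem definition only, not the talk's algorithms, which avoid this exponential enumeration):

```python
from itertools import combinations

# Exact (exponential-time) evaluation of the Euclidean k-supplier
# objective: open k facilities minimizing the maximum distance from
# any client to its nearest open facility.
def k_supplier_opt(clients, facilities, k):
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    best = float("inf")
    for opened in combinations(facilities, k):
        # Covering radius: the farthest client from its nearest open facility.
        radius = max(min(dist(c, f) for f in opened) for c in clients)
        best = min(best, radius)
    return best
```

With clients at (0, 0) and (4, 0) and candidate facilities at (0, 1), (4, 1), and (2, 0), opening one facility gives an optimal radius of 2.0 (the middle facility), while opening two gives 1.0.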

Mon, 28/01/2013
11:00 AM - 12:00 PM
Science education in Israeli schools - an amendable gap,
Dr. Joseph Shapira, electromagnetics and communications expert, formerly head of the National Committee for Radio Sciences

Abstract: Only six percent of Israel's students study physics at the five-unit matriculation level; and even those who do memorize formulas instead of understanding.

What IBM and similar high-tech companies need are graduates with skills in gathering, filtering, and examining information; structured inquisitive thinking; seeing the big picture and its contexts; creativity; peer communication; and skills in presenting ideas and information.

The lecture will explain how this can be achieved through inquiry-based physics programs.



Wed, 16/01/2013
11:00 AM - 12:00 PM
Quantitative Formal Verification,
Dr. Udi Boker

Abstract: Traditional formal verification is Boolean, handling Boolean properties of the verified systems and providing a Boolean answer as to whether or not a system satisfies a given specification. In recent years, there has been growing need and interest in verifying quantitative properties of systems, as well as in reasoning about the quality level of the satisfaction.

We investigate the extension of formal verification into a quantitative paradigm. To this end, we introduce extensions of temporal logics, which play a key role in Boolean specifications, and analyze the decidability and complexity of the induced verification problems.

For addressing quantitative satisfaction, we introduce temporal logics in which the satisfaction value of a formula is a number between 0 and 1, describing the quality of the satisfaction. These logics generalize standard temporal logics by augmenting them with an arbitrary set of functions over the interval [0,1]. For example, a formula may specify the minimum between the satisfaction values of sub-formulas, their product, or their weighted average. For handling quantitative properties, we extend temporal logics with atomic assertions on accumulated values, allowing one to specify requirements related to the accumulated sum or average of numeric properties along a computation.
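As a toy illustration of the idea (our own sketch, not the authors' formalism, and covering only a propositional fragment with no temporal operators), each subformula yields a satisfaction value in [0, 1], and formulas may combine values with min, product, or a weighted average:

```python
# Formulas are nested tuples: ('atom', name), ('min', f1, f2, ...),
# ('prod', f1, f2, ...), or ('wavg', [(w1, f1), (w2, f2), ...]).
# env maps atom names to satisfaction values in [0, 1].
def sat_value(formula, env):
    op, *args = formula
    if op == "atom":
        return env[args[0]]
    if op == "min":
        return min(sat_value(a, env) for a in args)
    if op == "prod":
        v = 1.0
        for a in args:
            v *= sat_value(a, env)
        return v
    if op == "wavg":
        return sum(w * sat_value(f, env) for w, f in args[0])
    raise ValueError(f"unknown operator: {op}")
```

For instance, with p = 0.8 and q = 0.5, the minimum of p and q evaluates to 0.5, their product to 0.4, and a (0.25, 0.75)-weighted average to 0.575; standard Boolean logic is recovered when all atoms take values in {0, 1}.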

In the talk, I will present our current results, the relation to other formalisms, such as probabilistic, fuzzy, timed, and concurrent systems, and the road ahead of us.

Based on joint work with Shaull Almagor, Krishnendu Chatterjee, Thomas A. Henzinger, and Orna Kupferman.

Udi Boker is a postdoctoral fellow at IST Austria, working in the group of Thomas A. Henzinger. He received his PhD in Computer Science from Tel Aviv University, under the supervision of Nachum Dershowitz, and continued to a postdoc at the Hebrew University, working in the group of Orna Kupferman.

In industry, he was an R&D director at Mercury Interactive (later acquired by HP), where he initiated and led the development of a new approach to load testing over the Web.

Udi's research concerns formal methods, automata theory, computational models, computability, and logic.

Tue, 15/01/2013
02:00 PM - 03:00 PM
Tractable solutions to some challenging optimization problems,
Prof. Aharon Ben-Tal, Faculty of Industrial Engineering and Management, Technion

Abstract: The need to solve real-life optimization problems frequently poses a severe challenge, as the underlying mathematical programs threaten to be intractable. The intractability can be attributed to any of the following properties: high dimensionality of the design space; lack of convexity; parameters affected by uncertainty. In problems of designing optimal mechanical structures (truss topology design, shape design, free material optimization), the mathematical programs typically have hundreds of thousands of variables, a fact which rules out the use of advanced modern solution methods, such as Interior Point methods. The same situation occurs in medical imaging (reconstruction of clinically acceptable images from Positron Emission Tomography). Some signal processing and estimation problems may result in nonconvex formulations. In the wide area of optimization under uncertainty, some classical approaches, such as chance (probabilistic) constraints, give rise to nonconvex NP-hard problems. Nonconvexity also occurs in some robust control problems.

In all the above applications we explain how the difficulties were resolved. In some cases this was achieved by mathematical analysis, which converted the problem (or its dual) into a tractable convex program. In other cases novel approximation schemes for probability inequalities were used. In the case of huge-scale convex programs, novel algorithms were employed. In the robust control example, a reparameterization scheme is developed under which the problem is converted into a tractable deterministic convex program.
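As a textbook-style illustration of the kind of conversion described above (not taken from the talk), consider a single uncertain linear constraint under box uncertainty, $a \in \mathcal{U} = \{\bar a + \delta \odot u : \|u\|_\infty \le 1\}$. Its robust counterpart is an explicit deterministic convex constraint:

```latex
a^{\top} x \le b \quad \forall a \in \mathcal{U}
\qquad\Longleftrightarrow\qquad
\bar{a}^{\top} x + \sum_{i} \delta_i \, |x_i| \le b
```

The right-hand side is convex in $x$ and reduces to a linear program after splitting each $|x_i|$ into positive and negative parts, which is the sense in which a semi-infinite uncertain constraint becomes "tractable".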

Aharon Ben-Tal is a Professor of Operations Research and Head of the MINERVA Optimization Center at the Faculty of Industrial Engineering and Management at the Technion – Israel Institute of Technology, and holder of the Dresner Chair. He received his Ph.D. in Applied Mathematics from Northwestern University in 1973. He has been a Visiting Professor at the University of Michigan, the University of Copenhagen, Delft University of Technology, and MIT, and is currently a Visiting Distinguished Scientist at CWI Amsterdam. His interests are in continuous optimization, particularly nonsmooth and large-scale problems, conic and robust optimization, as well as convex and nonsmooth analysis. Recently the focus of his research has been on optimization problems affected by uncertainty. In the last 15 years, he has devoted much effort to engineering applications of optimization methodology and computational schemes; some of the algorithms developed in the MINERVA Optimization Center are in use by industry (medical imaging, aerospace). He has published more than 110 papers in professional journals and co-authored three books: Optimality in Nonlinear Programming: A Feasible Direction Approach (Wiley-Interscience, 1981), Lectures on Modern Convex Optimization: Analysis, Algorithms and Engineering Applications (SIAM-MPS Series on Optimization, 2001), and Robust Optimization (Princeton University Press, 2009). Prof. Ben-Tal was Dean of the Faculty of Industrial Engineering and Management at the Technion (1989-1992) and served as a council member of the Mathematical Programming Society (1994-1997). He was Area Editor (Continuous Optimization) of Mathematics of Operations Research (1993-1999) and a member of the editorial boards of SIAM Journal on Optimization, Journal of Convex Analysis, Operations Research Letters, Mathematical Programming, Management Science, Mathematical Modelling and Numerical Analysis, European Journal of Operational Research, and Computational Management Science.

In 2007 Professor Ben-Tal was awarded the EURO Gold Medal - the highest distinction of Operations Research within Europe.
In 2009 he was named Fellow of INFORMS.

Mon, 14/01/2013
02:00 PM - 03:00 PM
Taming Non-classical Logics,
Anna Zamansky, Postdoctoral Fellow, Institute for Computer Languages, Technical University of Vienna

Abstract: In recent decades a vast variety of non-classical logics have been introduced, driven by various CS applications. Temporal logics, separation logics, fuzzy logics and paraconsistent logics are just a few prominent examples, used in verification of software and hardware, medical expert systems, data and knowledge bases, etc. A useful logic should ideally have two components: a simple and intuitive semantics, which can provide real insights into the logic, and a corresponding analytic proof system which is the key to effective proof search strategies for automated deduction methods. Obtaining these components for a given logic is a challenging process, which is usually tailored to the particular logic at hand. However, due to the increasing number of new application-driven logics, there is a need for a systematic approach to obtaining these components, which could be used for developing tools for automatic support for the design and investigation of logical systems.

In this talk we show that this goal can be achieved at least for some useful families of non-classical logics. We provide a uniform and modular method for a systematic generation of effective semantics and analytic proof systems for a very large family of paraconsistent logics used for reasoning with inconsistent information, thus making a substantial step towards the development of efficient paraconsistent theorem provers. The method, implemented by the Prolog system PARAlyzer, has been extended to infinitely many other logics formulated in terms of axiomatic systems of a certain natural form.

Wed, 09/01/2013
11:00 AM - 12:00 PM
What does concept computing promise and can it disrupt the universe of event-driven applications?,
Dr. Opher Etzion, IBM Research - Haifa

Abstract: The search for a new paradigm came from a pragmatic problem in the event processing space: current technology is too complex! Concept computing is a new paradigm, not yet having a stable name or an entry in Wikipedia, attempting to address an old problem: how to construct and maintain computing applications by non-programmers.

The talk will include an introductory part and a main part. The introductory part is aimed both at understanding the motivation for this search and at understanding the notion of concept computing. It contains a short survey of the state of the practice of event processing and its major challenges, as well as some historical perspective on attempts to build models for non-programmers and to understand what has worked and what has not. The main part describes the EFAL (Events For All) project using some examples. It surveys the knowledge model, which consists of descriptive and declarative parts, relates back to the state of the art in event processing, and outlines the planned roadmap and points of disruption.


2012 Lectures


Table header results
Mon, 31/12/2012
11:00 AM - 12:00 PM
Exploring Human Evolution and Deciphering the Human Genome Using Complete Individual Genome Sequences,
Ilan Gronau, postdoctoral research associate at the Siepel computational genomics lab, Department of Biological Statistics and Computational Biology, Cornell University

Abstract: High throughput DNA sequencing has transformed the landscape of genomic data by providing an affordable means to sequence the genomes of numerous species and multiple individuals per species. There has been a particularly dramatic increase in the last five years in the availability of individual human genomes and the genomes of closely related primate species. These data provide a rich source of information about human evolution and the forces that helped shape the human genome. This talk will focus on two specific problems I explored during my postdoctoral research using these new data sets.

The first problem I will be presenting is the recovery of ancient human demography and the evolutionary relationships between different human population groups. We recently developed a new demography inference method, called G-PhoCS (Generalized Phylogenetic Coalescent Sampler), which makes use of a small number of complete individual human genomes. Applying this method to the complete genomes of six human individuals from major human population groups, we were able to recover very ancient trends in human demography dating as far back as 130 thousand years ago. The second problem I will describe is that of inferring recent evolutionary pressures acting on regulatory elements in the human genome. Much of the DNA in the human genome is devoted to the regulation of gene expression, but regulatory DNA elements are typically short, dispersed, and often not conserved across long evolutionary timescales. This has made it very difficult for researchers to study the evolutionary pressures shaping regulatory DNA in the human genome. We recently developed a new inference scheme, called INSIGHT (Inference of Natural Selection from Interspersed Genomically coHerent elemenTs), that addresses these challenges by making use of individual human genomes and the genomes of closely related primates. This method was used to perform the first comprehensive study of natural selection acting on transcription factor binding sites, which are the best-characterized regulatory elements in the human genome. Our study sheds light on the selective forces that shaped these elements and has possible implications for the study of human disease.

This talk will highlight the methodological and algorithmic challenges in these problems, and will not require any prior biological knowledge.

Ilan Gronau is a computational biologist developing computational methods that use signatures of evolution to reconstruct the history of species and populations and shed light on the way genomic function evolves. Ilan has a master's degree in Applied Mathematics and Computer Science from the Weizmann Institute and a PhD in Computer Science from the Technion. He spent the last three years as a postdoc in Adam Siepel's computational genomics lab at Cornell. His work makes use of statistical models for population genetics and the evolution of DNA sequences, sophisticated algorithmic techniques, and cutting-edge genomic data sets.

Tue 04/12/2012
11:00 AM - 12:00 PM
How Alan Turing Cracked the Enigma Code,
Professor Ymir Vigfusson, Reykjavik University

Abstract: Code breakers played a crucial role in World War II. Alan Turing, the father of computer science, was at the center of Allied code-breaking operations, and his breakthroughs made intelligence gathering not only possible but practical. This general-audience talk, celebrating Alan Turing's centenary, explains the notorious German Enigma code and how it was systematically cracked by the Allies.

Ymir Vigfusson is an Assistant Professor at the School of Computer Science at Reykjavik University. He received a B.Sc. in Mathematics from the University of Iceland (2005) and a Ph.D. in Computer Science from Cornell University (2009), where he researched ways to exploit group similarity and improve scalability in distributed systems. His dissertation was nominated for the ACM Doctoral Dissertation Award by Cornell. Before his appointment at Reykjavik University, Ymir was a post-doctoral scientist at IBM Research - Haifa (2009-2011). Ymir's research projects include creating and optimizing systems and algorithms for distributed settings, and getting multicast and content distribution to work in a variety of environments. His work has been partially supported by a Fulbright Scholarship, a Yahoo! Research grant and a Grant-of-Excellence from the Icelandic Research Center. In his spare time, Ymir plays the piano, dances ballroom and flies small airplanes.

Mon 03/12/2012
3:00 PM - 4:00 PM
Can we exploit the basal ganglia machine learning tricks to cure human brain disorders?,
Professor Hagai Bergman, The Hebrew University – Hadassah Medical School and the Edmond and Lily Safra Center (ELSC) for Brain Research

Abstract: Continuous high-frequency Deep Brain Stimulation (DBS) is an established and effective therapy for the management of the clinical symptoms of advanced Parkinson's disease (Weaver et al., 2012; Bronstein et al., 2011; Weaver et al., 2009). However, since in present DBS systems stimulation parameters are only intermittently adjusted, DBS methods are poorly suited to cope with the fast neuronal and clinical dynamics of Parkinson's disease and other brain disorders.

Parkinson's disease is caused by the death of midbrain dopaminergic neurons and the consequent depletion of dopamine in the striatum, the input stage of the basal ganglia. The dopamine depletion in the striatum leads to a cascade of changes in the neural activity of the basal ganglia that are expressed as the clinical symptoms of Parkinson's disease. Our working hypothesis holds that the basal ganglia use an actor/critic architecture enabling multi-objective optimization of the trade-off between the gain and cost of behavior (Parush et al., 2011). The dopamine neurons (and other modulators of the basal ganglia) encode the mismatch between prediction and reality (the critic), whereas the main axis of the basal ganglia networks (Goldberg and Bergman, 2011) provides the connection between the neural encoding of the subject's current state and the motor apparatus (the actor).

We tested the effects of closed-loop stimulation in the MPTP primate model of Parkinson's disease (Rosin et al., 2011). Closed-loop stimulation has a significantly greater effect on akinesia and on cortical and basal ganglia discharge than standard open-loop DBS and matched control stimulation paradigms. Thus, the Parkinsonian basal ganglia can be observed and controlled. Closed-loop DBS paradigms therefore have potential not only for the treatment of Parkinson's disease, but perhaps also for other neurological/psychiatric disorders in which a clear pathological pattern of brain activity is recognized.

Reference List:
  • Bronstein JM, et al. (2011) Deep brain stimulation for Parkinson disease: an expert consensus and review of key issues. Arch Neurol 68:165.
  • Goldberg JA, Bergman H (2011) Computational physiology of the neural networks of the primate globus pallidus: function and dysfunction. Neuroscience 198:171-192.
  • Parush N, Tishby N, Bergman H (2011) Dopaminergic Balance between Reward Maximization and Policy Complexity. Front Syst Neurosci 5:22.
  • Rosin B, Slovik M, Mitelman R, Rivlin-Etzion M, Haber SN, Israel Z, Vaadia E, Bergman H (2011) Closed-loop deep brain stimulation is superior in ameliorating parkinsonism. Neuron 72:370-384.
  • Weaver FM, et al. (2009) Bilateral deep brain stimulation vs best medical therapy for patients with advanced Parkinson disease: a randomized controlled trial. JAMA 301:63-73.
  • Weaver FM, et al. (2012) Randomized trial of deep brain stimulation for Parkinson disease: Thirty-six-month outcomes. Neurology 79:55-65.

Tue 13/11/2012
11:00 AM - 12:00 PM
Lossy Compression for BigData: First Steps,
Professor Tsachy Weissman, Information Systems Laboratory, Department of Electrical Engineering, Stanford University

Abstract: Two key challenges in fitting BigData problems into a lossy compression framework are (i) the selection of an appropriate distortion measure, and (ii) characterizing the performance of distributed systems. Inspired by real systems, like web search, which return a list of likely data entries indexed by likelihood, we study the "logarithmic loss" distortion function in a multiterminal setting, thus addressing both challenges. In particular, we characterize the rate-distortion region for two (generally open) multiterminal source coding problems when distortion is measured under logarithmic loss. In addition to the main results, I'll discuss some implications for machine learning and estimation.
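For concreteness, under logarithmic loss a "soft" reconstruction is a probability distribution over source symbols, and the distortion is the log of the inverse probability assigned to the true symbol. A minimal sketch (the distortion definition is standard in this line of work; the example distribution is ours):

```python
import math

def log_loss(x, q):
    """Logarithmic loss: the reconstruction q is a probability
    distribution over source symbols; the distortion for true
    symbol x is log(1/q[x]), small when q ranks x highly."""
    return math.log(1.0 / q[x])

# A soft reconstruction ranking 'a' first, as a search engine
# ranks likely results.
q = {"a": 0.5, "b": 0.25, "c": 0.25}
print(log_loss("a", q))  # log 2 ≈ 0.693
print(log_loss("c", q))  # log 4 ≈ 1.386
```

This is the sense in which returning a ranked list of likely entries, rather than a single hard guess, fits naturally into a rate-distortion framework.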

Based on joint work with Tom Courtade.

Tue 30/10/2012
11:00 AM - 12:00 PM
Gödel, Escher, Bach: Gavish Ben-Almavet, a Hebrew translation of Gödel, Escher, Bach: an Eternal Golden Braid,
Tal Cohen & Yarden Nir-Buchbinder, Google

Abstract: Gödel, Escher, Bach: An Eternal Golden Braid was originally published in 1979 and almost immediately won great critical acclaim, a Pulitzer Prize for nonfiction, and cult status in communities as diverse as musicians and computer programmers. The book deals with a myriad of topics and presents a fascinating challenge to readers; but its linguistic playfulness presents an even greater challenge to translators. It was long considered "untranslatable", yet by now it has been translated into more than a dozen languages. In this talk, we discuss the book in general: what it is really about, and how well the book and the theory it presents have withstood the test of time. We then discuss our own translation of the book into Hebrew, an undertaking that lasted nearly 16 years, was recently published by Dvir (a Kinneret-Zmora-Bitan label), and has reached best-seller status in Israel.

Tue 09/10/2012
11:00 AM - 12:00 PM
Predicting Future Events from Large-scale Digital Histories,
Kira Radinsky, Technion

Abstract: It has been a long-standing quest of artificial intelligence to develop systems that can emulate human reasoning. Fundamental capabilities of such intelligent behavior are the abilities to understand causality and to predict. These are essential for many artificial intelligence tasks that rely on human common-sense reasoning, such as decision making, planning, question answering, and inferring user intentions and responses.

Much of the causal knowledge that helps humans understand the world is recorded in texts that express people's beliefs and intuitions. The World Wide Web encapsulates much of our human knowledge through news archives and encyclopedias. This knowledge can serve as the basis for performing true human-like prediction - with the ability to learn, understand language, and possess intuitions and general world knowledge.

In this talk I will present a learning system which, given an event represented in natural language, predicts a possible future event it can cause. During its training, we constructed a semantically structured causality graph of 300 million fact nodes connected by more than one billion edges, based on a 150-year news archive crawled from the web. We devised a machine learning algorithm that infers causality based on this graph. Using common-sense ontologies, it generalizes the events it observes, and is thus able to reason about completely new events. We empirically evaluated our system on news from 2010 and compared its predictions to human predictions. The results indicate that our system predicts similarly to the way humans do.

Kira Radinsky is the founder and CTO of the data-mining startup SalesPredict. Prior to that, she was a researcher at Microsoft Research Israel. She is finishing her PhD at the Technion, focusing on mining the web to predict future events. She has won several prestigious prizes (including the Google Anita Borg prize, the Yahoo! Key Scientific Challenge Award, and the Facebook data-mining award), filed over 10 patents, and served as a reviewer and PC member for many major AI and IR conferences, including WWW, KDD, ICAPS, SIGIR and AAAI. She has more than 10 years of varied industry experience: developing large-scale computer security infrastructures, open-source development, co-founding and serving as the CTO of a CMS startup, developing semantic recommendation systems at Webshakes, and conducting research at Microsoft Research.

Thu 13/09/2012
11:00 AM - 12:00 PM
Using Spatial Health Intelligence in Health Planning; The Case Study of Logan Beaudesert, Australia,
Dr. Ori Gudes, Queensland, Australia

Abstract: Geographical information systems offer an innovative approach to improving the decision making processes of health planners and policy makers. A health decision support system based on Esri ArcGIS server named EpidorosTM (formerly known as the Health Decision Support System [HDSS]) has been developed in Logan-Beaudesert, Queensland, Australia, an area with relatively high incidence of chronic disease. Health planners use EpidorosTM tools such as spatio-temporal analysis, hot spots analysis, catchment analysis and proximity analysis as evidence to support their health planning processes. When used in conjunction with diverse health-related data sets, these tools highlight the nature of relationships between socio-environmental factors and health outcomes so that decision-makers can proactively mitigate risks and devise holistic and sustainable solutions whilst optimising their resources. EpidorosTM has resulted in a significant increase in spatial intelligence as applied to health care planning and resource allocation in the prevention of chronic disease.

Dr. Ori Gudes is a Geographic Information Systems (GIS) expert and an urban planner from Queensland, Australia, whose research focuses on health and GIS. He completed his PhD (2012) at the Queensland University of Technology in the Urban Development School. The main project he has been working on is the Health Decision Support System (HDSS). The primary challenge this project addressed was to create a simple, engaging, and usable interface that helped a group of health planners in the Logan area make informed decisions based on evidence. The interface links users to an extensive GIS database which draws on a range of sources, including Australian Bureau of Statistics Census data, local data about land usage and resources or facilities, and health data. Recently, the project won the Queensland Spatial Excellence Award under the research and innovation category.

For more information about the project, see: http://www.spatialintelligence4health.com

Tue 04/09/2012
11:00 AM - 12:00 PM
Environmental Monitoring using Existing Measurements from Wireless Communication Networks,
Prof. Hagit Messer-Yaron, president of the Open University

Abstract: Accurate measurement of precipitation is a topic of great importance, both for basic research, to better understand the global climate and its dynamics (climate change), and for applications such as weather forecasting, flood warning, etc. While traditionally precipitation monitoring is done by costly special-purpose equipment such as gauges, radar, and satellites, it has lately been suggested [1] to use existing measurements from wireless communication networks for environmental monitoring.

In this talk I present recent results in the detection, estimation, and classification of precipitation as rain, snow, sleet, and fog using this approach. These results are based on the availability of spatially and temporally diverse measurements and employ multidimensional signal processing techniques adapted from sensor networks, detection and parameter estimation, classification, robust estimation, distributed detection, and more. I will also point out existing challenges and opportunities in this research area, mainly in the fields of source separation and vector sensors, as well as in sampling theory and its applications.
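A minimal sketch of the underlying physical idea (ours, not from the talk): rain-induced attenuation on a microwave link follows, to good approximation, a power law in the rain rate, which can be inverted to estimate rainfall from routinely logged link measurements. The coefficients below are placeholders of a plausible order of magnitude, not values from frequency-dependent propagation tables.

```python
# Power-law model: attenuation A (dB) over a link of length L (km)
# at path-averaged rain rate R (mm/h) satisfies  A = a * R**b * L,
# where a, b depend on frequency and polarization (placeholders here).

def rain_rate(attenuation_db, length_km, a=0.12, b=1.1):
    """Invert A = a * R**b * L for the path-averaged rain rate R."""
    return (attenuation_db / (a * length_km)) ** (1.0 / b)

# A 10 km link measuring 6 dB of excess (rain-induced) attenuation:
print(round(rain_rate(6.0, 10.0), 1))  # ≈ 4.3 mm/h
```

Because every link in a cellular backhaul network logs such attenuation anyway, a network of links acts as a dense, pre-deployed precipitation sensor array.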

[1] Messer H, Zinevich A, Alpert P (2006) Environmental monitoring by wireless communication networks. Science 312(5774):713.

Tue 14/08/2012
11:00 AM - 12:00 PM
From NAND to Tetris in 12 Steps,
Prof. Shimon Schocken, School of Computer Science, IDC Herzliya

Abstract: I'll present a course that synthesizes many abstractions, algorithms, and data structures learned in CS courses, and makes them concrete by building a complete computer system from the ground up. As the semester progresses, we guide the students through a modular series of projects that gradually construct and unit-test a simple hardware platform and a modern software hierarchy, yielding a surprisingly powerful computer system. The hardware projects are done in a simple hardware description language and a hardware simulator supplied by us. The software projects (assembler, VM, and a compiler for a simple object-based language) can be done in any language, using the APIs and test programs supplied by us. We also build a mini-OS. The result is a GameBoy-like computer, simulated on the student's PC. We start the course (and this talk) by demonstrating some video games running on this computer, e.g., Tetris and Pong.

Building a working computer from NAND gates alone is a thrilling intellectual exercise. It demonstrates the supreme power of recursive ascent, and teaches the students that building computer systems is -- more than anything else -- a triumph of human ingenuity. We are able to squeeze all this into a single course since we deal with neither efficiency nor advanced features, leaving these subjects to other courses in the program. The resulting approach is completely self-contained, requiring only programming as a pre-requisite. All the course materials -- software, lectures, and projects -- are freely available on the web in open source. The course and the approach are described in this book, available in full text on the web. Joint work with Noam Nisan.
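The first rung of that recursive ascent can be sketched in a few lines (our illustration in Python, not the course's hardware description language): every elementary Boolean gate derived from NAND alone.

```python
# NAND is the sole primitive; everything else is composed from it.

def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor_(a, b):
    # The classic four-NAND construction of XOR.
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# Truth table for XOR, built from nothing but NAND:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_(a, b))
```

Repeating this composition step -- gates into adders, adders into an ALU, an ALU into a CPU -- is exactly the "NAND to Tetris" progression.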

Wed 27/06/2012
11:00 AM - 12:00 PM
Algorithmic Mechanism Design,
Dr. Amir Ronen, IBM Research - Haifa

Abstract: Algorithmic game theory is an emerging field of research that has experienced explosive growth during the last decade. In this talk, I will first give a brief overview of this field. I will then focus on two papers by Noam Nisan and myself: Algorithmic Mechanism Design, which received the Gödel Prize this year, and Computationally Feasible VCG Mechanisms, which received the IJCAI-JAIR best paper prize last year.
The talk will be self-contained.

Tue 26/06/2012
11:00 AM - 12:00 PM
Multi-tasking in the Digital Information Age: Tasks, Information, and Interaction Contexts,
Prof. Gloria Mark, University of California, Irvine

Abstract: Multi-tasking is a way of life for information workers. In this talk I will present a set of empirical results from fieldwork observations and experiments which detail the extent to which information workers multitask with digital data and will discuss how multi-tasking impacts various aspects of collaboration and communication in the workplace. Multi-tasking changes with collocation, gender, and interruptions. I will report how people compensate for interruptions by working faster, but this comes at a cost of experiencing higher stress. I will also report on a recent study where we cut off the email of people in an organization for one week to understand how email affects multitasking. We found that without email in the workplace, people multitasked less and experienced lower stress. These results challenge the traditional way that most IT is designed to organize information, i.e., in terms of distinct tasks. Instead, I will discuss how IT should support information organization in a way consistent with how most people were found to organize their work, which is in terms of working spheres, thematically connected units of work. I will also discuss how the results present opportunities for new social and technical solutions to support multi-tasking in the workplace.

Gloria Mark is a Professor in the Department of Informatics, University of California, Irvine. Her principal research areas are in human-computer interaction and computer-supported cooperative work. Her research focuses on the design and evaluation of collaborative systems. Her current projects include studying multi-tasking of information workers, IT use for resilience and adaptation in disrupted environments, and mobile platforms for telemedicine. She received her PhD in Psychology from Columbia University. Prior to joining UCI in 2000, she worked at the GMD in Bonn, Germany (now the Fraunhofer Institute). In 2006 she received a Fulbright scholarship, under which she worked at the Humboldt University in Berlin, Germany. She has been the technical program chair for the ACM CSCW'12, ACM CSCW'06 and ACM GROUP'05 conferences, and is on the editorial boards of ACM TOCHI, Human-Computer Interaction, and Computer Supported Cooperative Work: The Journal of Collaborative Computing. She is the author of over 100 peer-reviewed publications, and her work has also appeared in the popular press, such as The New York Times, the BBC, Time, and The Wall Street Journal.

Wed 20/06/2012
11:00 AM - 12:00 PM
HUMANE COMPUTING,
Prof. Ophir Frieder, Department of Computer Science, Georgetown University

Abstract: Humane Computing is the design, development, and implementation of computing systems that directly focus on improving the human condition or experience. In that light, three efforts are presented, namely, spelling correction in adverse environments, spam detection algorithms for peer-to-peer file sharing systems, and novel techniques for urinary tract infection treatment.

The first effort addresses spelling correction in adverse environments. Two environments are discussed: foreign name search and medical term search. In support of the Yizkor Books project of the Archives Section of the United States Holocaust Memorial Museum, novel foreign name search approaches that favorably compare against the state of the art are developed. By segmenting names, fusing individual results, and filtering via a threshold, our approach statistically significantly improves on traditional Soundex and n-gram based search techniques used in the search of such texts. Thus, previously unsuccessful searches are now supported. Using a similar approach, within the medical domain, automated term corrections are made to reduce transcription errors.
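For context, the character n-gram baseline that the talk's approach improves upon can be sketched in a few lines (our illustration with made-up example names, not the actual Yizkor Books system): names are compared by the Dice overlap of their bigram sets, and a threshold turns the score into a match decision.

```python
# Character-bigram (n-gram) name matching, an illustrative baseline.

def bigrams(name):
    """The set of character bigrams of a lowercased name."""
    s = name.lower()
    return {s[i:i + 2] for i in range(len(s) - 1)}

def dice_similarity(a, b):
    """Dice coefficient of the two names' bigram sets, in [0, 1]."""
    ba, bb = bigrams(a), bigrams(b)
    if not ba or not bb:
        return 0.0
    return 2 * len(ba & bb) / (len(ba) + len(bb))

# Spelling variants of the same surname score high...
print(round(dice_similarity("Vigfusson", "Vigfuson"), 2))  # 0.93
# ...while unrelated names score low.
print(round(dice_similarity("Vigfusson", "Shapira"), 2))   # 0.0
```

Such a baseline tolerates single-character transcription variants, which is why it is the standard point of comparison for foreign name search.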

In the second effort, spam characteristics in peer-to-peer file sharing systems are determined. Using these characteristics, an approach that does not rely on external information or user feedback is developed. Cost reduction techniques are employed resulting in a statistically significant reduction of spam. Thus, the user search experience is improved.

Finally, a novel "self start", patient-specific approach for the treatment of recurrent urinary tract infections is presented. Using conventional data mining techniques, an approach that improves patient care, reduces bacterial mutation, and lowers treatment cost is presented. Thus, an approach that provides better, in terms of patient comfort, quicker, in terms of outbreak duration, and more economical care for female patients who suffer from recurrent urinary tract infections is described.

Ophir Frieder is the Robert L. McDevitt, K.S.G., K.C.H.S. and Catherine H. McDevitt L.C.H.S. Chair in Computer Science and Information Processing and is Chair of the Department of Computer Science at Georgetown University. His research interests focus on scalable information retrieval systems spanning search and retrieval and communications issues. He is a Fellow of the AAAS, ACM, and IEEE.

Mon, 26/03/2012
10:00 AM - 11:00 AM
Problems with Visual Analytics: Challenges and Applications,
Prof. Daniel A. Keim, University of Konstanz, Germany

Abstract: Never before in history has data been generated and collected at such high volumes as it is today. As the volumes of data available to business people, scientists, and the public increase, their effective use becomes more challenging. Keeping up to date with the flood of data, using standard tools for data analysis and exploration, is fraught with difficulty. The field of visual analytics seeks to provide people with better and more effective ways to understand and analyze large datasets, while also enabling them to act upon their findings immediately. Visual analytics integrates the analytic capabilities of the computer and the abilities of the human analyst, allowing novel discoveries and empowering individuals to take control of the analytical process. Visual analytics enables unexpected and hidden insights, which may lead to beneficial and profitable innovation. The talk presents the challenges of visual analytics and illustrates them with application examples, demonstrating the exciting potential of current visual analysis techniques.

Daniel A. Keim is a full professor and head of the Information Visualization and Data Analysis Research Group in the Computer Science Department of the University of Konstanz, Germany. He has been actively involved in data analysis and information visualization research for about 20 years and has developed a number of novel visual analysis techniques for very large data sets. He has been program co-chair of the IEEE InfoVis and IEEE VAST symposia as well as the SIGKDD conference, and he is a member of the IEEE InfoVis and VAST steering committees. He is an associate editor of Palgrave's Information Visualization Journal (since 2001) and the Knowledge and Information Systems Journal (since 2006), and has been an associate editor of the IEEE Transactions on Visualization and Computer Graphics (1999 – 2004) and the IEEE Transactions on Knowledge and Data Engineering (2002 – 2007). He is the coordinator of the German Strategic Research Initiative (SPP) on Scalable Visual Analytics and the scientific coordinator of the EU Coordination Action on Visual Analytics.

Dr. Keim received his Ph.D. and habilitation degrees in computer science from the University of Munich. Before joining the University of Konstanz, Dr. Keim was an associate professor at the University of Halle, Germany, and a Technology Consultant at AT&T Shannon Research Labs, NJ, USA.

Wed, 14/03/2012
11:00 AM - 12:00 PM
A computer that understands jokes: the next IBM project?,
Prof. Ron Aharoni, Department of Mathematics, Technion

Abstract: The role of humor in our lives is far more important than we tend to think. A large part of our social contact takes place through humor, and throughout our lives we look for reasons to laugh. Edward de Bono even claims that humor is the most important function of the human brain.
On the other hand, humor is very hard to define. In particular, what is a joke? People have tried to understand this for thousands of years, without much success. Today there is a simple test of a good definition: can it be used to program a computer to understand jokes?

The most widely accepted definition of a joke holds that its mechanism is a switch between planes of thought: one moves from one way of thinking to another. This definition is problematic in two respects. It includes too much (scientific discoveries, for example, involve a change in the way of thinking, yet they are not funny) and too little (what switch of thought-planes is there in slipping on a banana peel?)
In the lecture I will propose a different definition, with a slightly different mechanism. I will draw on the chapter about humor from my recently published book, "האדם מנתק משמעות" ("Man Detaches Meaning"), on a mechanism shared by humor and poetry.
Those who would like to think about this before the lecture are invited to try to find the humor mechanism in the following joke:
An Irishman is rushing to a meeting. For a whole hour he tries to find a parking spot, until finally, in despair, he turns to God: "God, if you help me just this once, I will go to church every Sunday and say the 'Hail Mary' prayer for a whole year." He has not yet finished speaking when a parking spot appears before him. "Never mind," he says to God, "I've already found one."


Wed, 22/02/2012
11:00 AM - 12:00 PM
Network Science - A Network of Sciences,
Prof. Ariel Orda, Department of Electrical Engineering, Technion

Abstract: Network Science is a newly emerging discipline with applications in a variety of domains, such as Communication Networks, Power Grid Networks, Transportation Networks, Social Networks and Biological Networks. Focusing on communication networks, we shall discuss what network science should be and what it should consist of. The talk will also feature some historical anecdotes, tracing back to ancient times.