Thursday, May 24, 2012

Final Week

What happened with the project.

What we lacked throughout this project was more teamwork: setting aside the differences we have with one another and being more communicative within the teams that were formed. In my opinion, if we had all talked from the beginning, or if every team had attended the meetings, it would have been a different story.

A cluster with LAM and a graphical interface was also put together in a somewhat improvised way; here is the link on the wiki:

http://elisa.dyndns-web.com/progra/clusterfuncional#Reuni.2BAPM-n_21_de_mayo_2012

Nominations:
Pedro Miguel
Juan Carlos
Rafa.

Sunday, May 20, 2012

Petri Net

In 1962, Carl Petri introduced a mathematical tool for studying communication with controllers. Some of the most important applications of Petri nets have been in the modeling and analysis of communication protocols and of manufacturing systems. In that area they have been used to represent production lines, automated assembly lines, automotive production systems, flexible manufacturing systems, just-in-time systems, etc.
 

Petri Nets are bipartite graphs consisting of three types of objects:
  • Places.
  • Transitions.
  • Directed arcs.

Petri nets are considered a tool for the study of systems. With their help we can model the behavior and structure of a system and take the model to boundary conditions which, in a real system, would be difficult or very expensive to reach.
The theory of Petri nets has become recognized as an established methodology in the robotics literature for modeling flexible manufacturing systems.
Compared with other models of dynamic behavior, such as flowcharts and finite-state machines, Petri nets offer a way to express processes that require synchronization. Perhaps most importantly, Petri nets can be analyzed formally to learn about the dynamic behavior of the modeled system.
Modeling a system with mathematical representations requires an abstraction of the system; this is achieved with Petri nets, which can also be studied as automata so that their mathematical properties can be investigated.


Areas of application

  • Data Analysis
  • Software design
  • Reliability
  • Workflow
  • Concurrent programming

Example:

Consider a single line serving 100 customers. The arrival times of customers are successive values of the random variable ta, the service times are given by the random variable ts, and N is the number of servers. In the initial state of this model the queue is empty and all servers are idle. The Petri net for this scenario is shown in the figure.




The places are labeled with capital letters and the transitions with lowercase letters. The labels of the places are also used as variables whose values are the token counts.

The arcs carry labels that may represent the transition functions, which specify the number of tokens removed or added when a transition fires.
Place A initially holds the 100 customers that will arrive, place B prevents a customer from entering more than once, and place Q is the queue where customers wait to be served. Place S holds the servers idling until there is work to do, and place E counts the customers that leave the system. The initial state gives the places the following values:

  • A = 100
  • B = 1
  • Q = 0
  • S = N
  • E = 0 

Transition a models customers entering the system and transition b models customers being served.
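
To make the token game concrete, here is a small Python sketch (my own illustration, not part of the cited material) that simulates this queueing net by repeatedly firing whichever of the transitions a and b is enabled. The place names A, B, Q, S and E match the example above, while the number of servers N and the random, untimed firing order are assumptions of the sketch.

```python
import random

# Initial marking from the example: A customers yet to arrive, B arrival enabler,
# Q waiting queue, S idle servers, E customers that have left the system.
N = 3  # number of servers (assumed value for this sketch)
marking = {"A": 100, "B": 1, "Q": 0, "S": N, "E": 0}

# Each transition as (tokens consumed, tokens produced).
transitions = {
    "a": ({"A": 1, "B": 1}, {"Q": 1, "B": 1}),  # a customer arrives and joins the queue
    "b": ({"Q": 1, "S": 1}, {"E": 1, "S": 1}),  # a server serves a customer, who then leaves
}

def enabled(name, m):
    """A transition is enabled when every input place holds enough tokens."""
    consume, _ = transitions[name]
    return all(m[p] >= k for p, k in consume.items())

def fire(name, m):
    """Fire a transition: remove the input tokens and add the output tokens."""
    consume, produce = transitions[name]
    for p, k in consume.items():
        m[p] -= k
    for p, k in produce.items():
        m[p] += k

# Token game: fire a randomly chosen enabled transition until none is enabled.
while True:
    choices = [t for t in transitions if enabled(t, marking)]
    if not choices:
        break
    fire(random.choice(choices), marking)

print(marking)  # ends with A=0, Q=0, E=100, B=1, S=N: every customer was served
```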
 
Classical Petri nets model states, events, conditions, synchronization, and parallelism, among other system features. However, Petri nets that describe the real world tend to be complex and extremely large. Furthermore, conventional Petri nets do not allow the modeling of data and time. To solve this problem, many extensions have been proposed. The extensions can be grouped into three categories: time, color, and hierarchies.

TIME EXTENSIONS
An important point in real systems is the description of the temporal behavior of the system. Because classical Petri nets are not able to handle time in a quantitative way, the concept of time is added to the model. Petri nets with time can in turn be divided into two classes: deterministic timed Petri nets (or timed Petri nets) and stochastic timed Petri nets (or stochastic Petri nets).
The family of Petri nets with deterministic time comprises Petri nets that associate a fixed firing time with their transitions, places, or arcs.

The family of Petri nets with stochastic time comprises Petri nets that associate a stochastic firing time with their transitions, places, or arcs.

Because this type of net associates a delay with the firing of an enabled transition, a reachability tree cannot be established, since the evolution is not deterministic. The analysis methods associated with such nets are Markov chains, in which the probability of reaching a new state depends only on the previous state.
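
As a rough illustration of the stochastic case (the rates below are made up, not taken from any source), each enabled transition can be given an exponentially distributed firing delay and the fastest one fires; this "race" between transitions is what makes the evolution behave like a Markov chain:

```python
import random

# Hypothetical firing rates (1 / mean delay) for two enabled transitions; these
# numbers are assumptions for the sketch, not values from the text.
rates = {"a": 2.0, "b": 1.0}

def next_firing(enabled):
    """Sample an exponential delay for every enabled transition; the smallest wins."""
    delays = {t: random.expovariate(rates[t]) for t in enabled}
    winner = min(delays, key=delays.get)
    return winner, delays[winner]

transition, delay = next_firing(["a", "b"])
print(f"transition {transition} fires after {delay:.3f} time units")
```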

EXTENSIONS WITH HIERARCHIES
The size problem that Petri nets have when modeling real systems can be addressed with hierarchical Petri nets. These nets provide, as the name implies, a hierarchy of subnets. A subnet is a set of places, transitions, arcs and even other subnets. The construction of a large system is thus based on a mechanism for structuring two or more processes, represented by subnets, so that one level gives a simple description of the processes and another level gives a more detailed description of their behavior.
Hierarchy extensions first appeared as extensions to colored Petri nets; however, types of hierarchical Petri nets that are not colored have also been developed.

EXTENSIONS WITH COLOR
Colored Petri nets are an extension of Petri nets on which a modeling language has been built. They are considered a modeling language for systems in which communication, synchronization and resource sharing are important. Thus, colored Petri nets combine the advantages of classical Petri nets with those of high-level programming languages. To make this statement clear, one can list the graphical elements of such nets.
Bibliography:
http://homepage.cem.itesm.mx/vlopez/redes_de_petri.htm

http://www.mitecnologico.com/Main/RedesDePetri

Scalability

Scalability is the ability of a system, network, or process to handle a growing amount of work in a capable manner, or its ability to be enlarged to accommodate that growth. For example, it can refer to the capability of a system to increase total throughput under an increased load when resources (typically hardware) are added. An analogous meaning is implied when the word is used in a commercial context, where scalability of a company implies that the underlying business model offers the potential for economic growth within the company.


 

Scalability, as a property of systems, is generally difficult to define, and in any particular case it is necessary to specify the requirements for scalability on those dimensions that are deemed important. It is a highly significant issue in electronic systems, databases, routers, and networking. A system whose performance improves after adding hardware, proportionally to the capacity added, is said to be a scalable system. An algorithm, design, networking protocol, program, or other system is said to scale if it is suitably efficient and practical when applied to large situations (e.g. a large input data set or a large number of participating nodes in the case of a distributed system). If the design fails when the quantity increases, it does not scale.
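
One common way to put a number on this (my own aside, not part of the cited definitions) is Amdahl's law: if a fraction p of the work can be parallelized, the speedup on n processors is at most 1 / ((1 - p) + p / n). A short Python sketch:

```python
def amdahl_speedup(p, n):
    """Upper bound on speedup when a fraction p of the work is parallel
    and it is spread over n processors (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / n)

# Example: a program that is 90% parallelizable does not scale linearly;
# even with many processors the serial 10% caps the speedup at 10x.
for n in (2, 4, 8, 16, 1024):
    print(n, round(amdahl_speedup(0.9, n), 2))
```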

In information technology, scalability (frequently spelled scaleability) seems to have two usages:
1) It is the ability of a computer application or product (hardware or software) to continue to function well when it (or its context) is changed in size or volume in order to meet a user need. Typically, the rescaling is to a larger size or volume. The rescaling can be of the product itself (for example, a line of computer systems of different sizes in terms of storage, RAM, and so forth) or in the scalable object's movement to a new context (for example, a new operating system).

2) It is the ability not only to function well in the rescaled situation, but to actually take full advantage of it. For example, an application program would be scalable if it could be moved from a smaller to a larger operating system and take full advantage of the larger operating system in terms of performance (user response time and so forth) and of the larger number of users that could be handled.
 

Bibliography:
http://en.wikipedia.org/wiki/Scalability
http://searchdatacenter.techtarget.com/definition/scalability

Thursday, May 17, 2012

Week 15

This week's contribution was working more on PVM with my classmates Alejandro, Esteban and Jonathan.
Here is the link:
http://elisa.dyndns-web.com/progra/SistemasParalelos/XPVM

My nominations go to:

Esteban Sifuentes
Pedro Miguel

Saturday, May 12, 2012

Grid Computing

Grid computing is an innovative technology that enables the coordinated use of all kinds of resources (including computing, storage and specific applications) that are not subject to centralized control. In this sense it is a new form of distributed computing, in which resources can be heterogeneous (different architectures, supercomputers, clusters...) and are connected by wide-area networks (for example the Internet). Developed in scientific fields in the early 1990s, it entered the commercial market following the idea of so-called utility computing, and it represents a major revolution. The term grid refers to an infrastructure that allows the integration and collective use of high-performance computers, networks and databases that are owned and managed by different institutions.



Grid computing is the name given to a system for sharing distributed resources that are not geographically centralized in order to solve large-scale problems. The shared resources can be computers (PCs, workstations, supercomputers, PDAs, laptops, phones, etc.), software, data and information, special instruments (radio telescopes, etc.) or people/collaborators.
Grid computing offers many advantages over alternative technologies. The power offered by a multitude of computers networked in a grid is virtually unlimited, and the grid provides seamless integration of heterogeneous systems and devices, so the connections between different machines do not cause problems. This makes it a highly scalable, powerful and flexible technology, avoiding problems caused by a lack of resources (bottlenecks), and it never becomes obsolete, thanks to the possibility of modifying the number and characteristics of its components.
These resources are distributed across the network transparently, while maintaining security guidelines and management policies, both technical and economic. Its goal is thus to share a set of online resources in a uniform, secure, transparent, efficient and reliable way, providing a single point of access to a set of geographically distributed resources in different administrative domains. This can lead us to think of grid computing as enabling the creation of virtual enterprises. It is important to know that a grid is a distributed set of machines that helps run heavy software workloads.

Some features of grid computing are:
- It works over a network architecture with well-defined protocols.
- Virtual teams can be assembled around the world.
- Requires access controls and security.
- Various institutions can pool their resources to get results.
- The grid locates the less-used machines and assigns them tasks.
- Not all problems can be solved with a grid.

Moreover, this technology gives companies the benefit of speed, a competitive advantage that translates into shorter times for producing new products and services.

It makes it easy to share, access and manage information through collaboration and operational flexibility, combining not only diverse technological resources but also people with different skills. It also tends to increase productivity by giving end users access to the computing, storage and data resources they need, when they need them.



With respect to security in the grid, it is provided by the "intergrids", where the security is the same as that offered by the LAN network on which the grid technology is used.

Parallelism may be seen as a problem, since a parallel machine is very expensive. But if we have available a set of small or medium-sized heterogeneous machines whose combined computing power is sufficiently large, we can build distributed systems of very low cost and high computing power.

To maintain its structure, grid computing needs various services: Internet connections available 24 hours a day, 365 days a year, bandwidth, server capacity, security, VPNs, firewalls, encryption, secure communications, security policies, ISO standards and more. Without all these functions and features we cannot speak of grid computing.

Fault tolerance means that if one of the machines that make up the grid fails, the system recognizes it and the task is forwarded to another machine, thereby creating flexible and robust operational infrastructures.
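
A minimal sketch of that resubmission idea (hypothetical node names and functions, not the API of any real grid middleware): if the node running a task fails, the task is simply handed to the next available node.

```python
import random

def run_on(node, task):
    """Pretend to run a task on a grid node; a failed node raises an error."""
    if random.random() < 0.3:              # simulate a 30% chance of node failure
        raise RuntimeError(f"{node} went down")
    return f"result of {task} computed on {node}"

def submit_with_failover(task, nodes):
    """Try the task on each available node until one of them completes it."""
    for node in nodes:
        try:
            return run_on(node, task)
        except RuntimeError:
            continue                       # node failed: forward the task to another one
    raise RuntimeError("no node could complete the task")

print(submit_with_failover("render-frame-42", ["node-A", "node-B", "node-C"]))
```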

Applications of Grid Computing

Currently, there are five general applications for Grid Computing:

Super computing.

These are applications whose needs cannot be met by a single node. The needs arise at specific instants of time and are resource-intensive.

Real-time distributed systems.

These are applications that generate a flow of high-speed data to be analyzed and processed in real time.

Specific services.

Here the focus is not on computing power or storage capacity, but on resources that an organization may consider unneeded. The grid represents the organization's pooled resources.

Data intensive process.

These are applications that make heavy use of storage space. Such applications go beyond the storage capacity of a single node, and the data are distributed throughout the grid. In addition to the benefit of the increased space, distributing the data across the grid allows it to be accessed in a distributed manner.

Collaborative virtual environments.

An area associated with the concept of tele-immersion, which uses the vast resources of grid computing and its distributed nature to produce distributed 3D virtual environments.

Bibliography:
http://es.wikipedia.org/wiki/Computaci%C3%B3n_grid
http://www.textoscientificos.com/redes/computacion-grid/ventajas-desventajas-aplicaciones

Thursday, May 10, 2012

Week 14

This week's contribution was working with my classmates Alejandro and Jonathan on PVM.

link:
wiki

I nominate

Pedro: http://pedrito-martinez.blogspot.mx/2012/05/benchmarks.html

Osvaldo: http://4imedio.blogspot.mx/2012/05/reporte-semana-14-paralelos.html

for these contributions.

Benchmark

In computing terms, a benchmark is an application designed to measure the performance of a computer or of any of its components. For this purpose, the machine is subjected to a series of workloads or stimuli of different types with the intention of measuring its response to them. This makes it possible to estimate under which tasks or stimuli a given computer behaves reliably and effectively, and under which it proves inefficient. This information is very useful when selecting a machine to perform specific tasks, for example in the post-production and creation of audiovisual products, choosing the most appropriate one for a given process.


Benchmarks are also useful for estimating how obsolete a system is, or how much its performance could be improved through upgrades.
On the other hand, a benchmark can give us a computer's technical specifications together with its performance under different stimuli, allowing comparisons between different systems according to their specifications and performance. Such comparisons are useful for determining which technical characteristics are ideal for optimal performance on a specific task. A comparison between computers from different manufacturers (with different specifications) allows us to determine a priori which ones are more suitable for certain applications and which are better for others.

In the early 1990s, a definition of "supercomputer" was needed in order to produce comparable statistics. After experimenting with various metrics based on the number of processors, in 1992 the idea of using a list of production systems as the basis for comparison arose at the University of Mannheim; this was the origin of the TOP500 list of supercomputers.

A year later, Jack Dongarra joined the project with the Linpack benchmark. In May 1993 the first test list was compiled, based on data published on the Internet:
  • The statistics series on supercomputers from the University of Mannheim (1986-1992)
  • The list of the world's most powerful computing sites, maintained by Gunter Ahrendt
  • Information gathered by David Kahaner
To measure the power of the systems, the HPL benchmark is used: a portable version of the Linpack benchmark for distributed-memory computers.
Note that the list does not include grid computing systems, nor the MDGRAPE-3 supercomputer, which reaches a petaflop and is more powerful than any of the systems on the list but cannot run the benchmarking software used, since it is not a general-purpose supercomputer.
All lists published since the beginning of the project are posted on the project's website, so it makes no sense to copy that information elsewhere.
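
In the same spirit as Linpack, though far simpler than the real HPL and only a rough sketch, a machine's floating-point throughput can be estimated by timing a dense linear solve and dividing an approximate operation count (about 2/3·n³ for the factorization) by the elapsed time; the problem size below is just an assumption:

```python
import time
import numpy as np

n = 2000                                   # problem size (adjust to your machine)
a = np.random.rand(n, n)
b = np.random.rand(n)

start = time.perf_counter()
x = np.linalg.solve(a, b)                  # LU factorization plus triangular solves
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n**3                 # approximate operation count of the solve
print(f"solved {n}x{n} system in {elapsed:.2f} s -> ~{flops / elapsed / 1e9:.2f} GFLOP/s")
```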

Machines that have occupied the number 1

K_computer Fujitsu (Japan, June 2011 - present)
NUDT Tianhe-1A (China, November 2010 - June 2011)
Cray Jaguar (USA, November 2009 - November 2010)
IBM RoadRunner (USA, June 2008 - November 2009)
IBM Blue Gene/L (USA, November 2004 - June 2008)
NEC Earth Simulator (Japan, June 2002 - November 2004)

Thursday, May 3, 2012

Supercomputers


A little introduction to supercomputers, with some history

A supercomputer is the most powerful and fastest type of computer available at a given time. These machines are designed to process huge amounts of information in a short time and are dedicated to specific tasks; their use goes beyond the individual user and they are rather devoted to:
1. Searching for oil fields in large seismic databases.
2. The study and prediction of tornadoes.
3. The study and prediction of the weather anywhere in the world.
4. The development of models and projects for the creation of aircraft and flight simulators.

We should also add that supercomputers are a relatively new technology, so their use is not widespread and it is sensitive to change. For this reason the price is very high, upwards of 30 million dollars, and the number produced per year is small. Supercomputers are the most powerful and fastest computers in existence at a given time. They are large, the largest among their peers. They can process huge amounts of information in a short time and perform millions of instructions per second; they are aimed at specific tasks and have very large storage capacity. They are also the most expensive machines, with a cost that can exceed 30 million dollars. Because of their high cost very few are produced per year, and some are even made only on request.

They have special temperature control in order to dissipate the heat that some components can reach. The supercomputer acts as the arbiter of all applications and controls access to all files, as well as the input and output operations. Users turn to the organization's central computer when they require processing support.
They are designed as multiprocessing systems; the CPU is the processing center and can support thousands of online users. The number of processors a supercomputer can have depends mainly on the model; it can range from about 16 to 512 processors (as in the 1997 NEC SX-4) or more.


The Manchester Mark I
Britain's first supercomputer laid the foundations for many concepts still used today.
The world's first digital, electronic, stored-program computer (figure above) successfully executed its first program on June 21, 1948. This program was written by Tom Kilburn who, along with the late F.C. (Freddie) Williams, designed and built the Manchester Mark I computer. This machine, the prototype of the Mark I, quickly became known as 'The Baby Machine' or just 'The Baby'.
In modern terms, The Baby had a RAM (random-access memory) of only 32 positions or 'words', each word consisting of 32 bits (binary digits), which means that the machine had a total of 1024 bits of memory.
The RAM technology was based on the cathode-ray tube (CRT). CRTs were used to store data bits as charged areas on the phosphor screen, showing as an incandescent series of dots. The CRT's electron beam could efficiently manipulate this charge to write a 1 or a 0 and then read it back on demand. Freddie Williams led the research that perfected the use of CRT storage, with Tom Kilburn making decisive contributions.

The Cray 1
The Cray-1 was the first "modern" supercomputer.
The first Cray-1 in England was located at Daresbury Laboratory for two years before being moved to the University of London.
The first major success in the design of a modern supercomputer was the Cray-1, introduced in 1976. One reason the Cray-1 was so successful was that it could perform more than one hundred million arithmetic operations per second (100 MFLOP/s). It was designed by Seymour Cray, who left Control Data Corporation in 1970 to create his own company, Cray Research Inc., founded in 1972. If today, following a conventional approach, you tried to match that speed using PCs, you would need to connect 200 of them, or you could just buy 33 Sun4s. Cray Research Inc. made at least 16 of its fabulous Cray-1 machines. A typical Cray-1 in 1976 cost over 700,000 dollars. A nice touch is that you could order the machine in any color you wished, something that still holds.

IBM - SP2 computer
The IBM SP2 installed at Daresbury Laboratory has, since an upgrade this year, 24 P2SC (Super Chip) nodes, plus another 2 older wide nodes, housed in two racks, only the second of which is shown in the picture. Each node has a 120 MHz clock and 128 MB of memory. Two new High Performance Switches (TV3) are used to connect the nodes together. Data storage consists of 40 GB of locally attached fast disk, with Ethernet and FDDI networks for user access.
A maximum individual node performance of 480 Mflops gives a total of over 12 Gflops for the entire machine.
A PowerPC RS/6000 workstation is connected to the SP2 system for hardware and software monitoring and management.

Bibliography:

http://www.monografias.com/trabajos65/supercomputadoras/supercomputadoras1.shtml
http://roberto.4mg.com/historia.htm
http://conecti.ca/2011/08/15/infografia-historia-de-las-supercomputadoras/

Week 13

This week's contribution was some research on a data mining algorithm, at the request of my classmate Avendaño. It is worth noting that it can be applied in clustering to build neural networks, which is where I would like to give this algorithm more weight and where my classmate Avendaño was most interested.

 link:
 http://elisa.dyndns-web.com/progra/Algoritmos/Mineriadedatos

My nominations go to:
Jonathan Alvarado, Alejandro Avendaño and Osvaldo Hinojosa.