Friday, March 29, 2019
Literature review about data warehouse
CHAPTER 2 LITERATURE REVIEW

2.1 INTRODUCTION

Chapter 2 provides a literature review of data warehouse, OLAP MDDB and data mining concepts. We reviewed the concept, characteristics, design and implementation approach of each of the above mentioned technologies to identify a suitable data warehouse framework. This framework will support integration of OLAP MDDB and a data mining model. Section 2.2 discusses the fundamentals of the data warehouse, which include data warehouse models and data processing techniques such as extract, transform and loading (ETL) processes. A comparative study was done on the data warehouse models introduced by William Inmon (Inmon, 1999), Ralph Kimball (Kimball, 1996) and Matthias Nicola (Nicola, 2000) to identify a suitable design, architecture and characteristics. Section 2.3 introduces the OLAP model and architecture. We also discuss the concept of processing in OLAP based MDDB, MDDB schema design and implementation. Section 2.4 introduces data mining techniques, methods and processes for OLAP mining (OLAM), which is used to mine MDDB. Section 2.5 provides conclusions on the literature review, especially pointers on our decision to propose a new data warehouse model. Since we propose to use Microsoft products to implement the proposed model, we also discuss a product comparison to explain why the Microsoft product was selected.

2.2 DATA WAREHOUSE

According to William Inmon, a data warehouse is a subject-oriented, integrated, time-variant, and non-volatile collection of data in support of management's decision-making process (Inmon, 1999). A data warehouse is a database containing data that usually represents the business history of an organization.
This historical data is used for analysis that supports business decisions at many levels, from strategic planning to performance evaluation of a discrete organizational unit. It provides an effective integration of operational databases into an environment that enables strategic use of data (Zhou, Hull, King and Franchitti, 1995). These technologies include relational and MDDB management systems, client/server architecture, meta-data modelling and repositories, graphical user interfaces and much more (Hammer, Garcia-Molina, Labio, Widom, and Zhuge, 1995; Harinarayan, Rajaraman, and Ullman, 1996). The emergence of cross discipline domains such as knowledge management in finance, health and e-commerce has proved that vast amounts of data need to be analysed. The evolution of data in the data warehouse can provide multiple dataset dimensions to solve various problems. Thus, the critical decision making process on this dataset needs a suitable data warehouse model (Barquin and Edelstein, 1996).

The main proponents of the data warehouse are William Inmon (Inmon, 1999) and Ralph Kimball (Kimball, 1996), but they have different perspectives on the data warehouse in terms of design and architecture. Inmon (Inmon, 1999) defined the data warehouse as a dependent data mart structure, while Kimball (Kimball, 1996) defined the data warehouse as a bus based data mart structure. Table 2.1 discusses the differences in data warehouse structure between William Inmon and Ralph Kimball.

A data warehouse is a read-only data source where end-users are not allowed to change the values or data elements. Inmon's (Inmon, 1999) data warehouse architecture strategy is different from Kimball's (Kimball, 1996). Inmon's data warehouse model splits off data marts as copies, distributed as an interface between the data warehouse and end users.
Kimball views the data warehouse as a union of data marts: the data warehouse is the collection of data marts combined into one central repository. Figure 2.1 illustrates the differences between Inmon's and Kimball's data warehouse architectures, adapted from (Mailvaganam, 2007).

Although Inmon and Kimball have different design views of the data warehouse, they do agree that successful implementation of a data warehouse depends on an effective collection of operational data and validation of the data marts. The roles of database staging and of ETL processes on the data are inevitable components in both researchers' data warehouse designs. Both believed that a dependent data warehouse architecture is required to fulfil the requirements of enterprise end users in terms of precision, timing and data relevancy.

2.2.1 DATA WAREHOUSE ARCHITECTURE

Data warehouse architecture offers considerable research scope and can be viewed from many perspectives. (Thilini and Hugh, 2005) and (Eckerson, 2003) provide a meaningful way to view and analyse data warehouse architecture. Eckerson states that a successful data warehouse system depends on the database staging process, which derives data from different integrated Online Transactional Processing (OLTP) systems. In this case, the ETL process plays a crucial role in making the database staging process workable. A survey on factors that influence the selection of data warehouse architecture by (Thilini, 2005) identifies five data warehouse architectures in common use, as shown in Table 2.2:

Independent Data Marts

Independent data marts are also known as localized or small scale data warehouses. They are mainly used by departments or divisions of a company to provide individual operational databases. This type of data mart is simple, yet consists of different forms derived from multiple design structures from various inconsistent database designs. Thus, it complicates cross data mart analysis.
Since every organizational unit tends to build its own database which operates as an independent data mart ((Thilini and Hugh, 2005) citing the work of (Winsberg, 1996) and (Hoss, 2002)), it is best used as an ad-hoc data warehouse and as a prototype before building a real data warehouse.

Data Mart Bus Architecture

(Kimball, 1996) pioneered the design and architecture of the data warehouse as a union of data marts, known as the bus architecture or virtual data warehouse. The bus architecture allows data marts to be located not only on one server but also on different servers. This allows the data warehouse to exist more in virtual mode, with all data marts combined and processed as one data warehouse.

Hub-and-Spoke Architecture

(Inmon, 1999) developed the hub-and-spoke architecture. The hub is the central server taking care of information exchange, and the spokes handle data transformation for all regional operational data stores. Hub-and-spoke mainly focuses on building a scalable and maintainable infrastructure for the data warehouse.

Centralized Data Warehouse Architecture

The centralized data warehouse architecture is built on the hub-and-spoke architecture but without the dependent data mart component. This architecture copies and stores heterogeneous operational and external data in a single, consistent data warehouse. It has only one data model, which is consistent and complete across all data sources. According to (Inmon, 1999) and (Kimball, 1996), a central data warehouse should include database staging, also known as an operational data store, as an intermediate stage for operational processing of data integration before data is transformed into the data warehouse.

Federated Architecture

According to (Hackney, 2000), a federated data warehouse is an integration of multiple heterogeneous data marts, database staging or operational data stores, and a combination of analytical applications and reporting systems.
The federated concept focuses on an integrated framework to make the data warehouse more reliable. (Jindal, 2004) concludes that the federated data warehouse is a practical approach, as it focuses on higher reliability and provides excellent value.

(Thilini and Hugh, 2005) conclude that the hub-and-spoke and centralized data warehouse architectures are similar. Hub-and-spoke is faster and easier to implement because no data marts are required, while the centralized data warehouse architecture scored higher than hub-and-spoke for urgent needs that demand a relatively fast implementation approach.

In this work, it is very important to identify which data warehouse architecture is robust and scalable for building and deploying enterprise wide systems. (Laney, 2000) states that selection of an appropriate data warehouse architecture must incorporate the successful characteristics of various data warehouse models. It is evident that two data warehouse architectures have proved popular, as shown by (Thilini and Hugh, 2005), (Eckerson, 2003) and (Mailvaganam, 2007): first, the hub-and-spoke architecture proposed by (Inmon, 1999), a data warehouse with dependent data marts, and second, the data mart bus architecture with dimensional data marts proposed by (Kimball, 1996). The new proposed model will use the hub-and-spoke data warehouse architecture, which can be used for MDDB modelling.

2.2.2 DATA WAREHOUSE EXTRACT, TRANSFORM, LOADING

The data warehouse architecture process begins with an ETL process to ensure the data passes the quality threshold. According to Evin (2001), it is essential to have the right dataset. ETL is an important component in the data warehouse environment, ensuring that datasets in the data warehouse are cleansed from the various OLTP systems. ETL is also responsible for running scheduled tasks that extract data from OLTP systems.
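The extract, transform and load flow described above can be sketched in a few lines. This is a hypothetical, minimal illustration: the table names (sales_raw, sales_clean) and the in-memory SQLite databases are invented stand-ins for a real OLTP source system and a warehouse staging database.

```python
import sqlite3

# Illustrative only: ":memory:" databases stand in for an OLTP source
# and a warehouse staging area; table/column names are invented.
source = sqlite3.connect(":memory:")
warehouse = sqlite3.connect(":memory:")

source.execute("CREATE TABLE sales_raw (region TEXT, amount TEXT)")
source.executemany("INSERT INTO sales_raw VALUES (?, ?)",
                   [("north", "100"), ("NORTH", "250"), ("south", None)])

# Extract: pull the raw rows from the operational source
rows = source.execute("SELECT region, amount FROM sales_raw").fetchall()

# Transform: cleanse (drop rows with missing measures) and
# standardize values (uppercase region codes, numeric amounts)
clean = [(region.upper(), float(amount))
         for region, amount in rows if amount is not None]

# Load: populate the warehouse table with validated data
warehouse.execute("CREATE TABLE sales_clean (region TEXT, amount REAL)")
warehouse.executemany("INSERT INTO sales_clean VALUES (?, ?)", clean)
warehouse.commit()

print(warehouse.execute(
    "SELECT region, SUM(amount) FROM sales_clean GROUP BY region").fetchall())
# prints [('NORTH', 350.0)]
```

In a production warehouse this flow is run as a scheduled job, but the three stages keep the same shape: extract from OLTP, cleanse and standardize, then load into the staging or warehouse tables.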
Typically, a data warehouse is populated with historical information from within a particular organization (Bunger, Colby, Cole, McKenna, Mulagund, and Wilhite, 2001). The complete process descriptions of ETL are discussed in Table 2.3.

A data warehouse database can be populated with a wide variety of data sources from different locations, thus collecting all the different datasets and storing them in one central location is an extremely challenging task (Calvanese, Giacomo, Lenzerini, Nardi, and Rosati, 2001). However, ETL processes eliminate the complexity of data population via a simplified process, as depicted in figure 2.2. The ETL process begins with data extraction from operational databases, where data cleansing and scrubbing are done to ensure all data are clean. The data is then transformed to meet data warehouse standards before it is loaded into the data warehouse. (Zhou et al, 1995) state that during the data integration process in a data warehouse, ETL can assist the import and export of operational data between heterogeneous data sources using Object Linking and Embedding Database (OLE-DB) based architecture, where the data are transformed to populate all validated data into the data warehouse.

Kimball's (Kimball, 1996) data warehouse architecture, as shown in figure 2.3, focuses on three important modules: the back room, the presentation server and the front room. ETL processes are implemented in the back room, where the data staging process is in charge of gathering all source systems' operational databases and performing extraction of data from source systems in different file formats from different systems and platforms. The second step is to run the transformation process to remove all inconsistency and ensure data integrity. Finally, the data is loaded into data marts. The ETL processes are commonly executed from a job control via scheduled tasks. The presentation server is the data warehouse where data marts are stored and processed.
Data stored in a star schema consists of dimension and fact tables. The data is then processed in the front room, where it is accessed by query services such as reporting tools, desktop tools, OLAP and data mining tools.

Although ETL processes prove to be an essential component for ensuring data integrity in the data warehouse, the issues of complexity and scalability play an important role in deciding the type of data warehouse architecture. One way to achieve a scalable, non-complex solution is to adopt a hub-and-spoke architecture for the ETL process. According to Evin (2001), ETL operates best in a hub-and-spoke architecture because of its flexibility and efficiency. A centralized data warehouse design supports the maintenance of full access control over ETL processes.

ETL processes in the hub-and-spoke data warehouse architecture are recommended in (Inmon, 1999) and (Kimball, 1996). The hub is the data warehouse, after processing data from the operational database into the staging database, and the spoke(s) are the data marts for distributing data. Sherman, R (2005) states that the hub-and-spoke approach uses one-to-many interfaces from the data warehouse to many data marts. One-to-many interfaces are simpler to implement, cost effective in the long run and ensure consistent dimensions, whereas the many-to-many approach is more complex and costly.

2.2.3 DATA WAREHOUSE FAILURE AND SUCCESS FACTORS

Building a data warehouse is indeed a challenging task, as a data warehouse project inherits unique characteristics that may influence the overall reliability and robustness of the data warehouse. These factors can be applied during the analysis, design and implementation phases to ensure a successful data warehouse system. Section 2.2.3.1 focuses on factors that influence data warehouse project failure.
Section 2.2.3.2 discusses the success factors, that is, implementing the correct model to support a successful data warehouse project.

2.2.3.1 DATA WAREHOUSE FAILURE FACTORS

Studies by (Hayen, Rutashobya, and Vetter, 2007) show that implementing a data warehouse project is costly and risky, as a data warehouse project can cost over $1 million in the first year. It is estimated that two-thirds of data warehouse project attempts will eventually fail. (Hayen et al, 2007), citing the work of (Briggs, 2002) and (Vassiliadis, 2004), observe three factors in the failure of data warehouse projects: environment, project and technical factors, as shown in Table 2.4.

Environment factors involve organizational changes in terms of business, politics, mergers, takeovers and lack of top management support. These include human error, corporate culture, the decision making process and poor change management (Watson, 2004; Hayen et al, 2007). Poor technical knowledge of the requirements for data definitions and data quality from different organizational units may cause data warehouse failure. Incompetent and insufficient knowledge of data integration, and poor selection of the data warehouse model and data warehouse analysis applications, may cause major failure.

In spite of heavy investment in hardware, software and people, poor project management may lead to data warehouse project failure. For example, assigning a project manager who lacks knowledge and project experience in data warehousing may cause difficulty in quantifying the return on investment (ROI) and in achieving the project's triple constraint (cost, scope, time).

Data ownership and accessibility is a potential factor that may cause data warehouse project failure. This is considered a sensitive issue within the organization: one must not share or acquire someone else's data, as this is considered losing authority over the data (Vassiliadis, 2004).
Thus, any department's claim to total ownership of pure, clean and error free data must be restricted, as it might cause potential problems over data ownership rights.

2.2.3.2 DATA WAREHOUSE SUCCESS FACTORS

(Hwang M.I., 2007) stresses that data warehouse implementation is an important area of research and industrial practice, but only a few researchers have assessed the critical success factors for data warehouse implementations. He surveyed six data warehouse studies (Watson & Haley, 1997; Chen et al., 2000; Wixom & Watson, 2001; Watson et al., 2001; Hwang & Cappel, 2002; Shin, 2003) on the success factors in a data warehouse project. He concluded his survey with a list of success factors that influence data warehouse implementation, as depicted in figure 2.8, showing eight implementation factors which directly affect the six selected success variables.

The above mentioned data warehouse success factors provide an important guideline for implementing successful data warehouse projects. (Hwang M.I., 2007)'s studies show that an integrated selection of various factors, such as end user participation, top management support, and acquisition of quality source data with profound and clear business needs, plays a crucial role in data warehouse implementation. Besides that, other factors highlighted by Hayen R.L. (2007), citing the work of Briggs (2002), Vassiliadis (2004) and Watson (2004), such as project, environment and technical knowledge, also influence data warehouse implementation.

Summary

In this work on the new proposed model, the hub-and-spoke architecture is used as the central repository service, as many scholars including Inmon, Kimball, Evin, Sherman and Nicola adopt this data warehouse architecture. This approach allows locating the hub (data warehouse) and spokes (data marts) centrally, and they can be distributed across local or wide area networks depending on business requirements.
In designing the new proposed model, the hub-and-spoke architecture clearly identifies six important components that a data warehouse should have: ETL, a staging database or operational data store, data marts, MDDB, OLAP, and data mining end user applications such as data query, reporting, analysis and statistical tools. However, this process may differ from organization to organization. Depending on the ETL setup, some data warehouses may overwrite old data with new data, while others maintain a history and audit trail of all changes to the data.

2.3 ONLINE ANALYTICAL PROCESSING

The OLAP Council (1997) defines OLAP as a group of decision support systems that facilitate fast, consistent and interactive access to information that has been reformulated, transformed and summarized from relational datasets, mainly from the data warehouse, into MDDB, allowing optimal data retrieval and trend analysis.

According to Chaudhuri (1997), Burdick, D. et al. (2006) and Vassiliadis, P. (1999), OLAP is an important concept for strategic database analysis. OLAP has the ability to analyze large amounts of data for the extraction of valuable information. Analytical development can be for the business, education or medical sectors. The technologies of the data warehouse, OLAP, and analysis tools support that ability. OLAP enables the discovery of patterns and relationships contained in business activity by querying tons of data from multiple database source systems at one time (Nigel. P., 2008). Processing database information using OLAP requires an OLAP server to organize and transform data and build MDDB. The MDDB is then divided into blocks (cubes) for client OLAP tools to perform data analysis, which aims to discover new pattern relationships between the cubes.
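The core idea behind an OLAP server, summarizing relational rows into multidimensional cells that can then be sliced, can be sketched in a few lines. The fact rows and the dimension names (region, year) below are invented for illustration; a real OLAP server automates this at far larger scale and adds indexing and pre-aggregation.

```python
from collections import defaultdict

# Invented fact rows: (region, year, amount) tuples, as if read
# from a relational fact table in the warehouse.
facts = [
    ("north", 2018, 120.0),
    ("north", 2019, 80.0),
    ("south", 2018, 200.0),
    ("south", 2019, 150.0),
]

# Aggregate the rows into a tiny two-dimensional "cube":
# each cell is keyed by its dimension coordinates (region, year).
cube = defaultdict(float)
for region, year, amount in facts:
    cube[(region, year)] += amount

# A "slice" fixes one dimension member, here year = 2018,
# yielding a subset of the multidimensional array.
slice_2018 = {cell: value for cell, value in cube.items() if cell[1] == 2018}
print(slice_2018)
# prints {('north', 2018): 120.0, ('south', 2018): 200.0}
```

Drilling down or up corresponds to re-aggregating the same facts at a finer or coarser dimension level, which is why OLAP servers pre-compute aggregates for fast retrieval.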
Some popular OLAP server software programs include Oracle (C), IBM (C) and Microsoft (C).

Madeira (2003) supports the fact that OLAP and the data warehouse are complementary technologies which blend together. The data warehouse stores and manages data, while OLAP transforms data warehouse datasets into strategic information. OLAP functions range from basic navigation and browsing (often known as slice and dice) to calculations and more serious analyses such as time series and complex modelling. As decision-makers implement more advanced OLAP capabilities, they move from basic data access to the creation of information and on to the discovery of new knowledge.

2.3.4 OLAP ARCHITECTURE

In comparison to the data warehouse, which is usually based on relational technology, OLAP uses a multidimensional view of aggregated data to provide rapid access to strategic information for analysis. There are three types of OLAP architecture, based on the method in which they store multi-dimensional data and perform analysis operations on that dataset (Nigel, P., 2008): multidimensional OLAP (MOLAP), relational OLAP (ROLAP) and hybrid OLAP (HOLAP).

In MOLAP, as depicted in Diagram 2.11, datasets are stored and summarized in a multidimensional cube. The MOLAP architecture can perform faster than ROLAP and HOLAP (C). MOLAP cubes are designed and built for rapid data retrieval, enabling efficient slicing and dicing operations. MOLAP can perform complex calculations which have been pre-generated during cube creation. MOLAP processing is restricted to the initial cube that was created and is not bound to any additional replication of the cube.

In ROLAP, as depicted in Diagram 2.12, data and aggregations are stored in relational database tables to provide the OLAP slicing and dicing functionality. ROLAP is the slowest among the OLAP flavours. ROLAP relies on manipulating the data directly in the relational database to give the appearance of conventional OLAP's slicing and dicing functionality.
Basically, each slicing and dicing action is equivalent to adding a WHERE clause to the SQL statement. (C) ROLAP can manage large amounts of data and has no limitations on data size. ROLAP can leverage the intrinsic functionality of the relational database. However, ROLAP is slow in performance because each ROLAP activity is essentially a SQL query, or multiple SQL queries, against the relational database. The query time and the number of SQL statements executed depend on the complexity of the SQL statements, and this can become a bottleneck if the underlying dataset is large. ROLAP essentially depends on generated SQL statements to query the relational database and does not cater for all needs, which makes ROLAP technology conventionally limited by what SQL functionality can offer. (C)

HOLAP, as depicted in Diagram 2.13, combines the technologies of MOLAP and ROLAP. Data are stored in ROLAP relational database tables and the aggregations are stored in a MOLAP cube. HOLAP can drill down from the multidimensional cube into the underlying relational database data. To acquire summary information, HOLAP leverages cube technology for faster performance; to retrieve detail information, HOLAP drills down from the cube into the underlying relational data. (C)

In all the OLAP architectures (MOLAP, ROLAP and HOLAP), the datasets are stored in a multidimensional format, involving the creation of multidimensional blocks called data cubes (Harinarayan, 1996). The cube in an OLAP architecture may have three axes (dimensions), or more. Each axis (dimension) represents a logical category of data. One axis may, for example, represent the geographic location of the data, while others may indicate a state of time or a specific school.
Each of the categories, which will be described in the following section, can be broken down into successive levels, and it is possible to drill up or down between the levels.

Cabibo (1997) states that OLAP partitions are commonly stored in an OLAP server, with the relational database frequently stored on a separate server from the OLAP server. The OLAP server must then query across the network whenever it needs to access the relational tables to resolve a query. The impact of querying across the network depends on the performance characteristics of the network itself. Even when the relational database is placed on the same server as the OLAP server, inter-process calls and the associated context switching are required to retrieve relational data. With an OLAP partition, calls to the relational database, whether local or over the network, do not occur during querying.

2.3.3 OLAP FUNCTIONALITY

OLAP functionality offers dynamic multidimensional analysis, supporting end users with analytical activities that include calculations and modelling applied across dimensions, trend analysis over time periods, slicing subsets for on-screen viewing, and drilling to deeper levels of records (OLAP Council, 1997). OLAP is implemented in a multi-user client/server environment and provides reliably fast responses to queries, regardless of database size and complexity. OLAP helps the end user integrate enterprise information through comparative, customized viewing and analysis of historical and present data in various what-if data model scenarios. This is achieved through the use of an OLAP server, as depicted in Diagram 2.9.

OLAP functionality is provided by an OLAP server. OLAP server design and data structures are optimized for fast information retrieval in any orientation, as well as for flexible calculation and transformation of raw data.
The OLAP server may either physically stage the processed multidimensional information to deliver consistent and fast response times to end users, or it may populate its data structures in real time from relational databases, or offer a choice of both.

Essentially, OLAP creates information in cube form, which allows more complex analysis than a relational database. OLAP analysis techniques employ slice and dice and drilling methods to segregate data into subsets of information depending on given parameters. A slice identifies a single value for one or more members of a dimension, yielding a subset of the multidimensional array, while the dice function applies the slice function on more than two dimensions of the multidimensional cube. The drilling function allows end users to traverse from condensed data down to the most detailed data unit, as depicted in Diagram 2.10.

2.3.5 MULTIDIMENSIONAL DATABASE SCHEMA

The base of every data warehouse system is a relational database built using a dimensional model. A dimensional model consists of fact and dimension tables, described as a star schema or snowflake schema (Kimball, 1999). A schema is a collection of database objects, tables, views and indexes (Inmon, 1996). To understand dimensional data modelling, Table 2.10 defines some of the terms commonly used in this type of modelling.

In designing data models for the data warehouse, the most commonly used schema types are the star schema and the snowflake schema. In the star schema design, the fact table sits in the middle and is connected to the surrounding dimension tables like a star. A star schema can be simple or complex: a simple star consists of one fact table, while a complex star can have more than one fact table.

Most data warehouses use a star schema to represent the multidimensional data model. The database consists of a single fact table and a single table for each dimension.
Each tuple in the fact table consists of a pointer or foreign key to each of the dimensions that provide its multidimensional coordinates, and stores the numeric measures for those coordinates. A tuple consists of a unit of data extracted from a cube in a range of members from one or more dimension tables (C, http://msdn.microsoft.com/en-us/library/aa216769%28SQL.80%29.aspx). Each dimension table consists of columns that correspond to attributes of the dimension. Diagram 2.14 shows an example of a star schema for a Medical Informatics System.

Star schemas do not explicitly provide support for attribute hierarchies, which makes them unsuitable for architectures such as MOLAP that require many hierarchies of dimension tables for efficient drilling of datasets. Snowflake schemas provide a refinement of star schemas in which the dimensional hierarchy is explicitly represented by normalizing the dimension tables, as shown in Diagram 2.15. The main advantage of the snowflake schema is the improvement in query performance due to minimized disk storage requirements and joins against smaller lookup tables. The main disadvantage of the snowflake schema is the additional maintenance effort needed due to the increased number of lookup tables. (C)

Levene, M (2003) stresses that in addition to the fact and dimension tables, data warehouses store selected summary tables containing pre-aggregated data. In the simplest cases, the pre-aggregated data corresponds to aggregating the fact table on one or more selected dimensions. Such pre-aggregated summary data can be represented in the database in at least two ways. Whether to use a star or a snowflake schema mainly depends on business needs.

2.3.2 OLAP EVALUATION

As OLAP technology takes a prominent place in the data warehouse industry, there should be a suitable assessment tool to evaluate it. E.F.
Codd not only invented OLAP but also provided a set of procedures known as the Twelve Rules for OLAP product evaluation, which include data manipulation, unlimited dimensions and aggregation levels, and flexible reporting, as shown in Table 2.8 (Codd, 1993).

Codd's twelve rules of OLAP provide an essential tool for verifying that the OLAP functions and OLAP models used are able to produce the desired result. Berson, A. (2001) stressed that a good OLAP system should also provide complete database management tools, as an integrated centralized utility allowing database management to distribute databases within the enterprise. OLAP's ability to perform drilling within the MDDB allows drilling down right to the source, or root, of the detail record level. This implies that OLAP tools permit a transition from the MDDB down to the detail record level of the source relational database. OLAP systems must also support incremental database refreshes. This is an important feature for preventing stability issues in operations and usability problems as the size of the database increases.

2.3.1 OLTP and OLAP

The design of OLAP for multidimensional cubes is entirely different from that of OLTP for databases. OLTP is implemented in relational databases to support day-to-day processing in an organization. The OLTP system's main function is to capture data into computers. OLTP allows effective data manipulation and storage of data for everyday operations, resulting in huge quantities of transactional data. Organisations build multiple OLTP systems to handle the huge quantities of daily operational transactional data in a short period of time.

OLAP is designed for data access and analysis to support managerial users' strategic decision making processes. OLAP technology focuses on aggregating datasets into a multidimensional view without hindering system performance. According to Han, J.
(2001), OLTP systems are customer oriented while OLAP systems are market oriented. He summarized the major differences between OLTP and OLAP systems based on 17 key criteria, as shown in Table 2.7.

It is complicated to merge OLAP and OLTP into one centralized database system. The dimensional data design model used in OLAP is much more effective for querying than the relational model used in OLTP systems. OLAP may use one central database as its data source, while OLTP uses different data sources from different database sites. The dimensional design of OLAP is not suitable for an OLTP system, mainly due to redundancy and the loss of referential integrity of the data. Organizations therefore choose to have two separate information systems, one OLTP and one OLAP system (Poe, V., 1997). We can conclude that the purpose of OLTP systems is to get data into computers, whereas the purpose of OLAP is to get data or information out of computers.

2.4 DATA MINING

Many data mining scholars (Fayyad, 1998; Freitas, 2002; Han, J. et. al., 1996; Frawley, 1992) have defined data mining as discovering hidden patterns from historical datasets by using pattern recognition, as it involves searching for specific, unknown information in a database. Chung, H. (1999) and Fayyad et al (1996) referred to data mining as a step of knowledge discovery in databases: the process of analyzing data, extracting knowledge from a large database, also known as a data warehouse (Han, J., 2000), and making it into useful information.

Freitas (2002) and Fayyad (1996) have recognized data mining as an advantageous tool for extracting knowledge from a database