Question 1. Which Components Are Used For Stream Flow Of Data?
For streaming of data, three components are used:
Bolt :- Bolts represent the processing logic units in Storm. One can utilize bolts to do any kind of processing, such as filtering, aggregating, joining, interacting with data stores, talking to external systems and so on. Bolts can also emit tuples (data messages) for the subsequent bolts to process. Additionally, bolts are responsible for acknowledging the processing of tuples after they are done processing.
Spout :- Spouts represent the source of data in Storm. You can write spouts to read data from sources such as databases, distributed file systems, messaging frameworks and so on. Spouts can broadly be classified as follows –
Reliable – These spouts have the capability to replay the tuples (a tuple is a unit of data in a data stream). This helps applications achieve the 'at least once message processing' semantic, since in case of failures, tuples can be replayed and processed again. Spouts that fetch data from messaging frameworks are generally reliable, as these frameworks provide a mechanism to replay the messages.
Unreliable – These spouts don't have the capability to replay the tuples. Once a tuple is emitted, it cannot be replayed, regardless of whether it was processed successfully or not. This type of spout follows the 'at most once message processing' semantic.
Tuple :- The tuple is the main data structure in Storm. A tuple is a named list of values, where each value can be of any type. Tuples are dynamically typed — the types of the fields do not need to be declared. Tuples have helper methods like getInteger and getString to get field values without having to cast the result. Storm needs to know how to serialize all the values in a tuple. By default, Storm knows how to serialize the primitive types, strings, and byte arrays. If you want to use another type, you'll need to implement and register a serializer for that type.
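The named-list-of-values shape and the typed getters can be shown with a small stand-alone sketch. SimpleTuple below is a hypothetical class written for illustration only; it mirrors the idea of Storm's Tuple helpers but is not Storm's actual implementation:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative stand-in for Storm's Tuple: a named list of values with
// typed getters so callers don't have to cast. Not Storm's real class.
public class SimpleTuple {
    private final Map<String, Object> fields = new LinkedHashMap<>();

    public SimpleTuple(List<String> names, List<Object> values) {
        for (int i = 0; i < names.size(); i++) {
            fields.put(names.get(i), values.get(i));
        }
    }

    // Typed getters, analogous to Tuple.getInteger / Tuple.getString.
    public Integer getInteger(String field) { return (Integer) fields.get(field); }
    public String getString(String field)  { return (String) fields.get(field); }

    public static void main(String[] args) {
        SimpleTuple t = new SimpleTuple(List.of("word", "count"), List.of("storm", 3));
        System.out.println(t.getString("word") + ":" + t.getInteger("count")); // storm:3
    }
}
```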
Question 2. What Are The Key Benefits Of Using Storm For Real Time Processing?
Easy to operate : Operating Storm is quite easy.
Real fast : It can process 100 messages per second per node.
Fault tolerant : It detects faults automatically and restarts the failed components.
Reliable : It guarantees that each unit of data will be processed at least once or exactly once.
Scalable : It runs across a cluster of machines.
Question 3. Does Apache Act As A Proxy Server?
Yes, it acts as a proxy as well, by using the mod_proxy module. This module implements a proxy, gateway or cache for Apache. It implements proxying capability for AJP13 (Apache JServ Protocol version 1.3), FTP, CONNECT (for SSL), HTTP/0.9, HTTP/1.0, and (since Apache 1.3.23) HTTP/1.1. The module can be configured to connect to other proxy modules for these and other protocols.
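As an illustration, a minimal forward-proxy setup with mod_proxy might look like the following httpd.conf fragment (modern Apache 2.4 syntax; the module paths and the allowed client network are assumptions):

```apache
# Load the proxy modules (paths depend on the installation).
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so

# Enable forward proxying, restricted to a trusted network.
ProxyRequests On
<Proxy "*">
    Require ip 192.0.2.0/24
</Proxy>
```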
Question 4. What Is The Use Of Zookeeper In Storm?
Storm uses Zookeeper for coordinating the cluster. Zookeeper is not used for message passing, so the load that Storm places on Zookeeper is quite low. Single-node Zookeeper clusters should be sufficient for most cases, but if you want failover or are deploying large Storm clusters you may want larger Zookeeper clusters. Instructions for deploying Zookeeper can be found in the Zookeeper documentation.
A few notes about Zookeeper deployment :
It's critical that you run Zookeeper under supervision, since Zookeeper is fail-fast and will exit the process if it encounters any error case.
It's critical that you set up a cron to compact Zookeeper's data and transaction logs. The Zookeeper daemon does not do this on its own, and if you don't set up a cron, Zookeeper will quickly run out of disk space.
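A minimal sketch of such a cron job, assuming a Zookeeper installation under /opt/zookeeper (the path and the retention count are illustrative):

```
# Run Zookeeper's bundled cleanup script hourly, keeping the 3 newest snapshots.
0 * * * * /opt/zookeeper/bin/zkCleanup.sh -n 3
```

Newer Zookeeper releases can alternatively purge old snapshots and logs themselves via the autopurge.snapRetainCount and autopurge.purgeInterval settings in zoo.cfg.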
Question 5. What Is ZeroMQ?
ZeroMQ is "a library which extends the standard socket interfaces with features traditionally provided by specialized messaging middleware products". Storm relies on ZeroMQ primarily for task-to-task communication in running Storm topologies.
Question 6. How Many Distinct Layers Are There In Storm's Codebase?
There are three distinct layers to Storm's codebase:
First : Storm was designed from the very beginning to be compatible with multiple languages. Nimbus is a Thrift service and topologies are defined as Thrift structures. The usage of Thrift allows Storm to be used from any language.
Second : all of Storm's interfaces are specified as Java interfaces. So even though there's a lot of Clojure in Storm's implementation, all usage must go through the Java API. This means that every feature of Storm is always available via Java.
Third : Storm's implementation is largely in Clojure. Line-wise, Storm is about half Java code, half Clojure code. But Clojure is much more expressive, so in reality the great majority of the implementation logic is in Clojure.
Question 7. What Does It Mean For A Message To Be "Fully Processed"?
A tuple coming off a spout can trigger thousands of tuples to be created based on it. Consider
the streaming word count topology:

TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("sentences", new KestrelSpout("kestrel.backtype.com",
                                               22133,
                                               "sentence_queue",
                                               new StringScheme()));
builder.setBolt("split", new SplitSentence(), 10)
       .shuffleGrouping("sentences");
builder.setBolt("count", new WordCount(), 20)
       .fieldsGrouping("split", new Fields("word"));
This topology reads sentences off a Kestrel queue, splits the sentences into their constituent words, and then emits for each word the number of times it has seen that word before. A tuple coming off the spout triggers many tuples being created based on it: a tuple for each word in the sentence and a tuple for the updated count for each word.
Storm considers a tuple coming off a spout "fully processed" when the tuple tree has been exhausted and every message in the tree has been processed. A tuple is considered failed when its tree of messages fails to be fully processed within a specified timeout. This timeout can be configured on a topology-specific basis using the Config.TOPOLOGY_MESSAGE_TIMEOUT_SECS configuration and defaults to 30 seconds.
Question 8. When Do You Call The Cleanup Method?
The cleanup method is called when a Bolt is being shut down, and it should clean up any resources that were opened. There's no guarantee that this method will be called on the cluster: for example, if the machine the task is running on blows up, there's no way to invoke the method.
The cleanup method is intended for when you run topologies in local mode (where a Storm cluster is simulated in-process), and you want to be able to run and kill many topologies without suffering any resource leaks.
Question 9. How Can We Kill A Topology?
To kill a topology, simply run:
storm kill {stormname}
Give the same name to storm kill as you used when submitting the topology.
Storm won't kill the topology immediately. Instead, it deactivates all the spouts so that they don't emit any more tuples, and then Storm waits Config.TOPOLOGY_MESSAGE_TIMEOUT_SECS seconds before destroying all the workers. This gives the topology enough time to finish any tuples it was processing when it got killed.
Question 10. What Is CombinerAggregator?
A CombinerAggregator is used to combine a set of tuples into a single field. It has the following signature:

public interface CombinerAggregator<T> {
    T init(TridentTuple tuple);
    T combine(T val1, T val2);
    T zero();
}

Storm calls the init() method with each tuple, and then repeatedly calls the combine() method until the partition is processed. The values passed into the combine() method are partial aggregations, the result of combining the values returned by calls to init(). The zero() method supplies the identity value returned for an empty partition.
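The call pattern can be shown with a stand-alone sketch. The interface below mirrors the Trident contract but substitutes a plain String for TridentTuple so it runs without Storm on the classpath; Count and aggregate are illustrative names:

```java
import java.util.List;

// Simplified version of the Trident contract (String stands in for TridentTuple).
interface CombinerAggregator<T> {
    T init(String tuple);      // called once per input tuple
    T combine(T val1, T val2); // folds two partial aggregations
    T zero();                  // identity value for an empty partition
}

// A word-count style combiner: each tuple contributes 1, combine() sums.
class Count implements CombinerAggregator<Long> {
    public Long init(String tuple) { return 1L; }
    public Long combine(Long val1, Long val2) { return val1 + val2; }
    public Long zero() { return 0L; }
}

public class CombinerDemo {
    // Emulates how Trident drives the aggregator over one partition.
    static <T> T aggregate(CombinerAggregator<T> agg, List<String> partition) {
        T acc = agg.zero();
        for (String tuple : partition) {
            acc = agg.combine(acc, agg.init(tuple));
        }
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(aggregate(new Count(), List.of("a", "b", "c"))); // 3
    }
}
```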
Question 11. What Are The Common Configurations In Apache Storm?
There are a variety of configurations you can set per topology. A list of all the configurations you can set can be found in the Storm documentation. The ones prefixed with "TOPOLOGY" can be overridden on a topology-specific basis (the other ones are cluster configurations and cannot be overridden).
Here are some common ones that are set for a topology:
1. Config.TOPOLOGY_WORKERS : This sets the number of worker processes to use to execute the topology. For example, if you set this to 25, there will be 25 Java processes across the cluster executing all the tasks. If you had a combined 150 parallelism across all components in the topology, each worker process will have 6 tasks running within it as threads.
2. Config.TOPOLOGY_ACKER_EXECUTORS : This sets the number of executors that will track tuple trees and detect when a spout tuple has been fully processed. By not setting this variable or setting it as null, Storm will set the number of acker executors to be equal to the number of workers configured for this topology. If this variable is set to 0, then Storm will immediately ack tuples as soon as they come off the spout, effectively disabling reliability.
3. Config.TOPOLOGY_MAX_SPOUT_PENDING : This sets the maximum number of spout tuples that can be pending on a single spout task at once (pending means the tuple has not been acked or failed yet). It is highly recommended you set this config to prevent queue explosion.
4. Config.TOPOLOGY_MESSAGE_TIMEOUT_SECS : This is the maximum amount of time a spout tuple has to be fully completed before it is considered failed. This value defaults to 30 seconds, which is enough for most topologies.
5. Config.TOPOLOGY_SERIALIZATIONS : You can register more serializers with Storm using this config so that you can use custom types within tuples.
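A sketch of how these configs might be set in code, assuming storm-core on the classpath and a TopologyBuilder named builder already defined; the values and the MyType class are illustrative, not recommendations:

```java
Config conf = new Config();
conf.setNumWorkers(25);                    // TOPOLOGY_WORKERS
conf.setNumAckers(25);                     // TOPOLOGY_ACKER_EXECUTORS
conf.setMaxSpoutPending(5000);             // TOPOLOGY_MAX_SPOUT_PENDING
conf.setMessageTimeoutSecs(30);            // TOPOLOGY_MESSAGE_TIMEOUT_SECS
conf.registerSerialization(MyType.class);  // TOPOLOGY_SERIALIZATIONS
StormSubmitter.submitTopology("my-topology", conf, builder.createTopology());
```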
Question 12. Is It Necessary To Kill The Topology While Updating The Running Topology?
Yes, to update a running topology, the only option currently is to kill the current topology and resubmit a new one. A planned feature is to implement a Storm swap command that swaps a running topology with a new one, ensuring minimal downtime and no chance of both topologies processing tuples at the same time.
Question 13. How Can Storm UI Be Used In Topology?
Storm UI is used for monitoring the topology. The Storm UI provides information about errors happening in tasks and fine-grained stats on the throughput and latency performance of each component of each running topology.
Question 14. Why Does Apache Not Include SSL?
SSL (Secure Sockets Layer) data transport requires encryption, and many governments have restrictions on the import, export, and use of encryption technology. If Apache included SSL in the base package, its distribution would involve all sorts of legal and bureaucratic issues, and it would no longer be freely available. Also, some of the technology required to talk to current clients using SSL is patented by RSA Data Security, who restricts its use without a license.
Question 15. Does Apache Include Any Sort Of Database Integration?
Apache is a Web (HTTP) server, not an application server. The base package does not include any such functionality. The PHP project and the mod_perl project allow you to work with databases from within the Apache environment.
Question 16. While Installing, Why Does Apache Have Three Config Files – srm.conf, access.conf And httpd.conf?
The first two are remnants from the NCSA times, and generally you should be fine if you delete the first two and stick with httpd.conf.
srm.conf :- This is the default file for the ResourceConfig directive in httpd.conf. It is processed after httpd.conf but before access.conf.
access.conf :- This is the default file for the AccessConfig directive in httpd.conf. It is processed after httpd.conf and srm.conf.
httpd.conf :- The httpd.conf file is well-commented and mostly self-explanatory.
Apache Hive Tutorial
Question 17. How To Check The httpd.conf For Consistency And Any Errors In It?
We can check the syntax of the httpd configuration file by using the following command.
This command will dump out a description of how Apache parsed the configuration file. Careful examination of the IP addresses and server names may help uncover configuration errors.
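Assuming a standard Apache installation, the commands would likely be:

```shell
# Check the configuration file for syntax errors only
# (apachectl configtest is an equivalent wrapper).
httpd -t

# Dump how Apache parsed the virtual host configuration
# (IP addresses and server names).
httpd -S
```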
Question 18. Explain When To Use Fields Grouping In Storm? Is There Any Time-out Or Limit To Known Field Values?
Fields grouping in Storm uses a mod hash function to decide which task to send a tuple to, ensuring that tuples with the same field values are always processed by the same task, in order. For that, you don't require any cache. So, there is no time-out or limit to known field values.
The stream is partitioned by the fields specified in the grouping. For example, if the stream is grouped by the "user-id" field, tuples with the same "user-id" will always go to the same task, but tuples with different "user-id"s may go to different tasks.
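The mod-hash idea can be sketched stand-alone (Storm's actual hashing differs in detail; the class and method names here are illustrative):

```java
// Sketch of fields grouping routing: hash the grouping field value and take
// it modulo the number of target tasks. Same value -> same task, always.
public class FieldsGroupingSketch {
    static int taskFor(Object fieldValue, int numTasks) {
        // floorMod keeps the index non-negative even for negative hash codes.
        return Math.floorMod(fieldValue.hashCode(), numTasks);
    }

    public static void main(String[] args) {
        int tasks = 8;
        // The same user-id always maps to the same task.
        System.out.println(taskFor("user-42", tasks) == taskFor("user-42", tasks)); // true
        // Different user-ids may (or may not) land on different tasks.
        System.out.println(taskFor("user-7", tasks));
    }
}
```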
Question 19. What Is Mod_vhost_alias?
This module creates dynamically configured virtual hosts by allowing the IP address and/or the Host: header of the HTTP request to be used as part of the path name to determine what files to serve. This allows for easy use of a huge number of virtual hosts with similar configurations.
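A minimal sketch of a mod_vhost_alias configuration (the document-root layout is an assumption; %0 interpolates the full requested hostname):

```apache
# Serve each requested hostname from its own directory, e.g. a request for
# www.example.com is served from /var/www/www.example.com/htdocs.
UseCanonicalName Off
VirtualDocumentRoot "/var/www/%0/htdocs"
```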
Question 20. Tell Me, Is Running Apache As Root A Security Risk?
No. The root process opens port 80, but never listens to it, so no user will actually enter the site with root rights. If you kill the root process, you will see the child processes disappear as well.
Question 21. What Is Multiviews?
MultiViews search is enabled by the MultiViews option. It is the general name given to the Apache server's ability to provide language-specific file variants in response to a request. This is documented quite thoroughly in the content negotiation description page. In addition, Apache Week carried an article on this subject. When a requested file is not found, the server scans the directory for files matching the requested name, then chooses the best match for the client's requirements and returns that document.
Question 22. Does Apache Include A Search Engine?
Yes, Apache contains a search engine. You can search for a file name in Apache by using the "Search name".
Question 23. Explain How You Can Streamline Log Files Using Apache Storm?
To read from the log files, you can configure your spout to emit one tuple per line as it reads the log. The output can then be assigned to a bolt for analysis.
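The idea can be sketched stand-alone, with a Consumer<String> standing in for the spout's output collector (in a real spout you would extend BaseRichSpout and call collector.emit; the names here are illustrative):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Spout-like reader: emits one "tuple" per log line via the collector callback.
public class LogLineSpoutSketch {
    static void emitLines(Reader source, Consumer<String> collector) throws IOException {
        try (BufferedReader reader = new BufferedReader(source)) {
            String line;
            while ((line = reader.readLine()) != null) {
                collector.accept(line); // real spout: collector.emit(new Values(line))
            }
        }
    }

    public static void main(String[] args) throws IOException {
        List<String> emitted = new ArrayList<>();
        emitLines(new StringReader("GET /index\nPOST /login\n"), emitted::add);
        System.out.println(emitted.size()); // 2
    }
}
```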
Question 24. Mention How Storm Application Can Be Beneficial In Financial Services?
In financial services, Storm can be helpful in preventing:
Securities fraud :
Perform real-time anomaly detection on known patterns of activities and use learned patterns from previous modeling and simulations.
Correlate transaction data with other streams (chat, e-mail, etc.) in a cost-effective parallel processing environment.
Reduce query time from hours to minutes on large volumes of data.
Build a single platform for operational applications and analytics that reduces total cost of ownership (TCO).
Order routing : Order routing is the process by which an order goes from the end user to an exchange. An order may go directly to the exchange from the customer, or it may go first to a broker who then routes the order to the exchange.
Pricing : Pricing is the process whereby a business sets the price at which it will sell its products and services, and may be part of the business's marketing plan.
Compliance violations : Compliance means conforming to a rule, such as a specification, policy, standard or law. Regulatory compliance describes the goal that organizations aspire to achieve in their efforts to ensure that they are aware of, and take steps to comply with, relevant laws and regulations. Any disturbance regarding compliance is a compliance violation.
Question 25. Can We Use Active Server Pages (ASP) With Apache?
The Apache Web Server package does not include ASP support. However, a number of projects provide ASP or ASP-like functionality for Apache.
Some of these are:
Apache::ASP :- Apache::ASP provides an Active Server Pages port to the Apache Web Server with Perl scripting only, and enables development of dynamic web applications with session management and embedded Perl code. There are also many powerful extensions, including XML taglibs, XSLT rendering, and new events not originally part of the ASP API.
mod_mono :- It is an Apache 2.0/2.2/2.4 module that provides ASP.NET support for the web's favorite server, Apache. It is hosted inside Apache. Depending on your configuration, the Apache box could be one or a dozen separate processes; all of these processes will send their ASP.NET requests to the mod-mono-server process. The mod-mono-server process in turn can host multiple independent applications. It does this by using Application Domains to isolate the applications from each other, while using a single Mono virtual machine.
Question 26. Explain What Is Topology_message_timeout_secs In Apache Storm?
It is the maximum amount of time allotted to the topology to fully process a message released by a spout. If the message is not acknowledged in the given time frame, Apache Storm will fail the message on the spout.
Question 27. Mention The Difference Between Apache Kafka And Apache Storm?
Apache Kafka : It is a distributed and robust messaging system that can handle huge amounts of data and allows passage of messages from one end-point to another. Kafka is designed to allow a single cluster to serve as the central data backbone for a large organization. It can be elastically and transparently expanded without downtime. Data streams are partitioned and spread over a cluster of machines to allow data streams larger than the capability of any single machine and to allow clusters of coordinated consumers.
Apache Storm : It is a real-time message processing system, and you can edit or manipulate data in real time. Storm pulls the data from Kafka and applies the required manipulation. It makes it easy to reliably process unbounded streams of data, doing for real-time processing what Hadoop did for batch processing. Storm is simple, can be used with any programming language, and is a lot of fun to use.
Question 28. What Is Servertype Directive In Apache Server?
It defines whether Apache should spawn itself as a child process (standalone) or keep everything in a single process (inetd). Keeping it inetd conserves resources.
The ServerType directive is included in Apache 1.3 for backward compatibility with older UNIX-based versions of Apache. By default, Apache is set to standalone, which means Apache will run as a separate application on the server. The ServerType directive isn't available in Apache 2.0.
Question 29. In Which Folder Are Java Application Stored In Apache?
Java applications are not stored in Apache; Apache can only be connected to a separate Java webapp hosting webserver using the mod_jk connector. mod_jk is a replacement for the elderly mod_jserv. It is a completely new Tomcat-Apache plug-in that handles the communication between Tomcat and Apache.
mod_jserv was too complex : Because it was ported from Apache/JServ, it brought with it a lot of JServ-specific bits that aren't needed by Apache.
mod_jserv supported only Apache : Tomcat supports many web servers through a compatibility layer named the jk library. Supporting different modes of work became complex in terms of support, documentation and bug fixes. mod_jk should fix that.
The layered approach : provided by the jk library makes it easier to support both Apache 1.3.x and Apache 2.x.x.
Better support for SSL : mod_jserv couldn't reliably identify whether a request was made via HTTP or HTTPS. mod_jk can, using the newer Ajpv13 protocol.